00:00:00.000 Started by upstream project "autotest-per-patch" build number 132394
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.018 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:02.690 The recommended git tool is: git
00:00:02.690 using credential 00000000-0000-0000-0000-000000000002
00:00:02.692 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:02.707 Fetching changes from the remote Git repository
00:00:02.710 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:02.726 Using shallow fetch with depth 1
00:00:02.726 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:02.726 > git --version # timeout=10
00:00:02.740 > git --version # 'git version 2.39.2'
00:00:02.740 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:02.754 Setting http proxy: proxy-dmz.intel.com:911
00:00:02.754 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.349 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.367 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.383 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.383 > git config core.sparsecheckout # timeout=10
00:00:08.399 > git read-tree -mu HEAD # timeout=10
00:00:08.418 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.443 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.444 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.541 [Pipeline] Start of Pipeline
00:00:08.555 [Pipeline] library
00:00:08.557 Loading library shm_lib@master
00:00:08.557 Library shm_lib@master is cached. Copying from home.
00:00:08.571 [Pipeline] node
00:00:23.573 Still waiting to schedule task
00:00:23.574 Waiting for next available executor on ‘vagrant-vm-host’
00:13:50.724 Running on VM-host-SM16 in /var/jenkins/workspace/raid-vg-autotest
00:13:50.726 [Pipeline] {
00:13:50.744 [Pipeline] catchError
00:13:50.747 [Pipeline] {
00:13:50.761 [Pipeline] wrap
00:13:50.773 [Pipeline] {
00:13:50.783 [Pipeline] stage
00:13:50.786 [Pipeline] { (Prologue)
00:13:50.807 [Pipeline] echo
00:13:50.809 Node: VM-host-SM16
00:13:50.816 [Pipeline] cleanWs
00:13:50.826 [WS-CLEANUP] Deleting project workspace...
00:13:50.826 [WS-CLEANUP] Deferred wipeout is used...
00:13:50.833 [WS-CLEANUP] done
00:13:51.065 [Pipeline] setCustomBuildProperty
00:13:51.161 [Pipeline] httpRequest
00:13:51.507 [Pipeline] echo
00:13:51.510 Sorcerer 10.211.164.20 is alive
00:13:51.522 [Pipeline] retry
00:13:51.524 [Pipeline] {
00:13:51.539 [Pipeline] httpRequest
00:13:51.544 HttpMethod: GET
00:13:51.545 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:13:51.545 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:13:51.546 Response Code: HTTP/1.1 200 OK
00:13:51.546 Success: Status code 200 is in the accepted range: 200,404
00:13:51.547 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:13:51.692 [Pipeline] }
00:13:51.709 [Pipeline] // retry
00:13:51.717 [Pipeline] sh
00:13:51.998 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:13:52.015 [Pipeline] httpRequest
00:13:52.332 [Pipeline] echo
00:13:52.334 Sorcerer 10.211.164.20 is alive
00:13:52.345 [Pipeline] retry
00:13:52.347 [Pipeline] {
00:13:52.362 [Pipeline] httpRequest
00:13:52.367 HttpMethod: GET
00:13:52.367 URL: http://10.211.164.20/packages/spdk_fa4f4fd150bbe9c9b90aa563cd283fd351d5655a.tar.gz
00:13:52.368 Sending request to url: http://10.211.164.20/packages/spdk_fa4f4fd150bbe9c9b90aa563cd283fd351d5655a.tar.gz
00:13:52.369 Response Code: HTTP/1.1 200 OK
00:13:52.369 Success: Status code 200 is in the accepted range: 200,404
00:13:52.370 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_fa4f4fd150bbe9c9b90aa563cd283fd351d5655a.tar.gz
00:13:56.114 [Pipeline] }
00:13:56.137 [Pipeline] // retry
00:13:56.146 [Pipeline] sh
00:13:56.437 + tar --no-same-owner -xf spdk_fa4f4fd150bbe9c9b90aa563cd283fd351d5655a.tar.gz
00:13:59.731 [Pipeline] sh
00:14:00.011 + git -C spdk log --oneline -n5
00:14:00.011 fa4f4fd15 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:14:00.011 b1f0bbae7 nvmf: Expose DIF type of namespace to host again
00:14:00.011 f9d18d578 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:14:00.011 a361eb5e2 nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK
00:14:00.011 4ab755590 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set
00:14:00.031 [Pipeline] writeFile
00:14:00.046 [Pipeline] sh
00:14:00.325 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:14:00.338 [Pipeline] sh
00:14:00.623 + cat autorun-spdk.conf
00:14:00.623 SPDK_RUN_FUNCTIONAL_TEST=1
00:14:00.623 SPDK_RUN_ASAN=1
00:14:00.623 SPDK_RUN_UBSAN=1
00:14:00.623 SPDK_TEST_RAID=1
00:14:00.623 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:14:00.630 RUN_NIGHTLY=0
00:14:00.632 [Pipeline] }
00:14:00.648 [Pipeline] // stage
00:14:00.663 [Pipeline] stage
00:14:00.665 [Pipeline] { (Run VM)
00:14:00.677 [Pipeline] sh
00:14:00.956 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:14:00.956 + echo 'Start stage prepare_nvme.sh'
00:14:00.956 Start stage prepare_nvme.sh
00:14:00.956 + [[ -n 5 ]]
00:14:00.956 + disk_prefix=ex5
00:14:00.956 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:14:00.956 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:14:00.956 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:14:00.956 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:14:00.956 ++ SPDK_RUN_ASAN=1
00:14:00.956 ++ SPDK_RUN_UBSAN=1
00:14:00.956 ++ SPDK_TEST_RAID=1
00:14:00.956 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:14:00.956 ++ RUN_NIGHTLY=0
00:14:00.956 + cd /var/jenkins/workspace/raid-vg-autotest
00:14:00.956 + nvme_files=()
00:14:00.956 + declare -A nvme_files
00:14:00.956 + backend_dir=/var/lib/libvirt/images/backends
00:14:00.956 + nvme_files['nvme.img']=5G
00:14:00.956 + nvme_files['nvme-cmb.img']=5G
00:14:00.956 + nvme_files['nvme-multi0.img']=4G
00:14:00.956 + nvme_files['nvme-multi1.img']=4G
00:14:00.956 + nvme_files['nvme-multi2.img']=4G
00:14:00.956 + nvme_files['nvme-openstack.img']=8G
00:14:00.956 + nvme_files['nvme-zns.img']=5G
00:14:00.956 + (( SPDK_TEST_NVME_PMR == 1 ))
00:14:00.956 + (( SPDK_TEST_FTL == 1 ))
00:14:00.956 + (( SPDK_TEST_NVME_FDP == 1 ))
00:14:00.956 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:14:00.956 + for nvme in "${!nvme_files[@]}"
00:14:00.956 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:14:00.956 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:14:00.956 + for nvme in "${!nvme_files[@]}"
00:14:00.956 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:14:00.956 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:14:00.957 + for nvme in "${!nvme_files[@]}"
00:14:00.957 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:14:00.957 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:14:00.957 + for nvme in "${!nvme_files[@]}"
00:14:00.957 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:14:00.957 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:14:00.957 + for nvme in "${!nvme_files[@]}"
00:14:00.957 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:14:00.957 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:14:00.957 + for nvme in "${!nvme_files[@]}"
00:14:00.957 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:14:00.957 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:14:00.957 + for nvme in "${!nvme_files[@]}"
00:14:00.957 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:14:00.957 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:14:00.957 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:14:00.957 + echo 'End stage prepare_nvme.sh'
00:14:00.957 End stage prepare_nvme.sh
00:14:00.968 [Pipeline] sh
00:14:01.248 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:14:01.248 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39
00:14:01.248
00:14:01.248 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:14:01.248 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:14:01.248 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:14:01.248 HELP=0
00:14:01.248 DRY_RUN=0
00:14:01.248 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,
00:14:01.248 NVME_DISKS_TYPE=nvme,nvme,
00:14:01.248 NVME_AUTO_CREATE=0
00:14:01.248 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,
00:14:01.248 NVME_CMB=,,
00:14:01.248 NVME_PMR=,,
00:14:01.248 NVME_ZNS=,,
00:14:01.248 NVME_MS=,,
00:14:01.248 NVME_FDP=,,
00:14:01.248 SPDK_VAGRANT_DISTRO=fedora39
00:14:01.248 SPDK_VAGRANT_VMCPU=10
00:14:01.248 SPDK_VAGRANT_VMRAM=12288
00:14:01.248 SPDK_VAGRANT_PROVIDER=libvirt
00:14:01.248 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:14:01.248 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:14:01.248 SPDK_OPENSTACK_NETWORK=0
00:14:01.248 VAGRANT_PACKAGE_BOX=0
00:14:01.248 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:14:01.248 FORCE_DISTRO=true
00:14:01.248 VAGRANT_BOX_VERSION=
00:14:01.248 EXTRA_VAGRANTFILES=
00:14:01.248 NIC_MODEL=e1000
00:14:01.248
00:14:01.248 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:14:01.248 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:14:04.617 Bringing machine 'default' up with 'libvirt' provider...
00:14:04.875 ==> default: Creating image (snapshot of base box volume).
00:14:05.134 ==> default: Creating domain with the following settings...
00:14:05.134 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732109647_4db7337df9ed8d9959ca
00:14:05.134 ==> default: -- Domain type: kvm
00:14:05.134 ==> default: -- Cpus: 10
00:14:05.134 ==> default: -- Feature: acpi
00:14:05.134 ==> default: -- Feature: apic
00:14:05.134 ==> default: -- Feature: pae
00:14:05.134 ==> default: -- Memory: 12288M
00:14:05.134 ==> default: -- Memory Backing: hugepages:
00:14:05.134 ==> default: -- Management MAC:
00:14:05.134 ==> default: -- Loader:
00:14:05.134 ==> default: -- Nvram:
00:14:05.134 ==> default: -- Base box: spdk/fedora39
00:14:05.134 ==> default: -- Storage pool: default
00:14:05.134 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732109647_4db7337df9ed8d9959ca.img (20G)
00:14:05.134 ==> default: -- Volume Cache: default
00:14:05.134 ==> default: -- Kernel:
00:14:05.134 ==> default: -- Initrd:
00:14:05.134 ==> default: -- Graphics Type: vnc
00:14:05.134 ==> default: -- Graphics Port: -1
00:14:05.134 ==> default: -- Graphics IP: 127.0.0.1
00:14:05.134 ==> default: -- Graphics Password: Not defined
00:14:05.134 ==> default: -- Video Type: cirrus
00:14:05.134 ==> default: -- Video VRAM: 9216
00:14:05.134 ==> default: -- Sound Type:
00:14:05.134 ==> default: -- Keymap: en-us
00:14:05.134 ==> default: -- TPM Path:
00:14:05.134 ==> default: -- INPUT: type=mouse, bus=ps2
00:14:05.134 ==> default: -- Command line args:
00:14:05.134 ==> default: -> value=-device,
00:14:05.134 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:14:05.134 ==> default: -> value=-drive,
00:14:05.134 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0,
00:14:05.134 ==> default: -> value=-device,
00:14:05.134 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:05.134 ==> default: -> value=-device,
00:14:05.134 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:14:05.134 ==> default: -> value=-drive,
00:14:05.134 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:14:05.134 ==> default: -> value=-device,
00:14:05.134 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:05.134 ==> default: -> value=-drive,
00:14:05.134 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:14:05.134 ==> default: -> value=-device,
00:14:05.134 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:05.134 ==> default: -> value=-drive,
00:14:05.134 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:14:05.134 ==> default: -> value=-device,
00:14:05.134 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:14:05.393 ==> default: Creating shared folders metadata...
00:14:05.393 ==> default: Starting domain.
00:14:06.768 ==> default: Waiting for domain to get an IP address...
00:14:24.888 ==> default: Waiting for SSH to become available...
00:14:24.888 ==> default: Configuring and enabling network interfaces...
00:14:28.173 default: SSH address: 192.168.121.86:22
00:14:28.173 default: SSH username: vagrant
00:14:28.173 default: SSH auth method: private key
00:14:30.706 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:14:38.818 ==> default: Mounting SSHFS shared folder...
00:14:39.753 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:14:39.753 ==> default: Checking Mount..
00:14:41.128 ==> default: Folder Successfully Mounted!
00:14:41.128 ==> default: Running provisioner: file...
00:14:41.695 default: ~/.gitconfig => .gitconfig
00:14:42.263
00:14:42.263 SUCCESS!
00:14:42.263
00:14:42.263 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:14:42.263 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:14:42.263 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:14:42.263
00:14:42.273 [Pipeline] }
00:14:42.288 [Pipeline] // stage
00:14:42.298 [Pipeline] dir
00:14:42.298 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:14:42.300 [Pipeline] {
00:14:42.313 [Pipeline] catchError
00:14:42.315 [Pipeline] {
00:14:42.327 [Pipeline] sh
00:14:42.627 + vagrant ssh-config --host vagrant
00:14:42.627 + sed -ne /^Host/,$p
00:14:42.628 + tee ssh_conf
00:14:46.853 Host vagrant
00:14:46.853 HostName 192.168.121.86
00:14:46.853 User vagrant
00:14:46.853 Port 22
00:14:46.853 UserKnownHostsFile /dev/null
00:14:46.853 StrictHostKeyChecking no
00:14:46.853 PasswordAuthentication no
00:14:46.853 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:14:46.853 IdentitiesOnly yes
00:14:46.853 LogLevel FATAL
00:14:46.853 ForwardAgent yes
00:14:46.853 ForwardX11 yes
00:14:46.853
00:14:46.868 [Pipeline] withEnv
00:14:46.870 [Pipeline] {
00:14:46.882 [Pipeline] sh
00:14:47.158 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:14:47.158 source /etc/os-release
00:14:47.158 [[ -e /image.version ]] && img=$(< /image.version)
00:14:47.158 # Minimal, systemd-like check.
00:14:47.158 if [[ -e /.dockerenv ]]; then
00:14:47.158 # Clear garbage from the node's name:
00:14:47.158 # agt-er_autotest_547-896 -> autotest_547-896
00:14:47.158 # $HOSTNAME is the actual container id
00:14:47.158 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:14:47.158 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:14:47.158 # We can assume this is a mount from a host where container is running,
00:14:47.158 # so fetch its hostname to easily identify the target swarm worker.
00:14:47.158 container="$(< /etc/hostname) ($agent)"
00:14:47.158 else
00:14:47.158 # Fallback
00:14:47.158 container=$agent
00:14:47.158 fi
00:14:47.158 fi
00:14:47.158 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:14:47.158
00:14:47.428 [Pipeline] }
00:14:47.445 [Pipeline] // withEnv
00:14:47.454 [Pipeline] setCustomBuildProperty
00:14:47.469 [Pipeline] stage
00:14:47.471 [Pipeline] { (Tests)
00:14:47.491 [Pipeline] sh
00:14:47.772 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:14:48.045 [Pipeline] sh
00:14:48.415 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:14:48.432 [Pipeline] timeout
00:14:48.433 Timeout set to expire in 1 hr 30 min
00:14:48.435 [Pipeline] {
00:14:48.451 [Pipeline] sh
00:14:48.729 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:14:49.295 HEAD is now at fa4f4fd15 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:14:49.307 [Pipeline] sh
00:14:49.585 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:14:49.856 [Pipeline] sh
00:14:50.135 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:14:50.408 [Pipeline] sh
00:14:50.687 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:14:50.945 ++ readlink -f spdk_repo
00:14:50.945 + DIR_ROOT=/home/vagrant/spdk_repo
00:14:50.945 + [[ -n /home/vagrant/spdk_repo ]]
00:14:50.945 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:14:50.945 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:14:50.945 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:14:50.945 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:14:50.945 + [[ -d /home/vagrant/spdk_repo/output ]]
00:14:50.945 + [[ raid-vg-autotest == pkgdep-* ]]
00:14:50.945 + cd /home/vagrant/spdk_repo
00:14:50.945 + source /etc/os-release
00:14:50.945 ++ NAME='Fedora Linux'
00:14:50.945 ++ VERSION='39 (Cloud Edition)'
00:14:50.945 ++ ID=fedora
00:14:50.945 ++ VERSION_ID=39
00:14:50.945 ++ VERSION_CODENAME=
00:14:50.945 ++ PLATFORM_ID=platform:f39
00:14:50.945 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:14:50.945 ++ ANSI_COLOR='0;38;2;60;110;180'
00:14:50.945 ++ LOGO=fedora-logo-icon
00:14:50.945 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:14:50.945 ++ HOME_URL=https://fedoraproject.org/
00:14:50.945 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:14:50.945 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:14:50.945 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:14:50.945 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:14:50.945 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:14:50.945 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:14:50.945 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:14:50.945 ++ SUPPORT_END=2024-11-12
00:14:50.945 ++ VARIANT='Cloud Edition'
00:14:50.945 ++ VARIANT_ID=cloud
00:14:50.945 + uname -a
00:14:50.945 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:14:50.945 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:14:51.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:14:51.512 Hugepages
00:14:51.512 node hugesize free / total
00:14:51.512 node0 1048576kB 0 / 0
00:14:51.513 node0 2048kB 0 / 0
00:14:51.513
00:14:51.513 Type BDF Vendor Device NUMA Driver Device Block devices
00:14:51.513 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:14:51.513 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:14:51.513 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:14:51.513 + rm -f /tmp/spdk-ld-path
00:14:51.513 + source autorun-spdk.conf
00:14:51.513 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:14:51.513 ++ SPDK_RUN_ASAN=1
00:14:51.513 ++ SPDK_RUN_UBSAN=1
00:14:51.513 ++ SPDK_TEST_RAID=1
00:14:51.513 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:14:51.513 ++ RUN_NIGHTLY=0
00:14:51.513 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:14:51.513 + [[ -n '' ]]
00:14:51.513 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:14:51.513 + for M in /var/spdk/build-*-manifest.txt
00:14:51.513 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:14:51.513 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:14:51.513 + for M in /var/spdk/build-*-manifest.txt
00:14:51.513 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:14:51.513 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:14:51.513 + for M in /var/spdk/build-*-manifest.txt
00:14:51.513 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:14:51.513 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:14:51.513 ++ uname
00:14:51.513 + [[ Linux == \L\i\n\u\x ]]
00:14:51.513 + sudo dmesg -T
00:14:51.513 + sudo dmesg --clear
00:14:51.513 + dmesg_pid=5374
00:14:51.513 + sudo dmesg -Tw
00:14:51.513 + [[ Fedora Linux == FreeBSD ]]
00:14:51.513 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:14:51.513 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:14:51.513 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:14:51.513 + [[ -x /usr/src/fio-static/fio ]]
00:14:51.513 + export FIO_BIN=/usr/src/fio-static/fio
00:14:51.513 + FIO_BIN=/usr/src/fio-static/fio
00:14:51.513 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:14:51.513 + [[ ! -v VFIO_QEMU_BIN ]]
00:14:51.513 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:14:51.513 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:51.513 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:51.513 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:14:51.513 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:51.513 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:51.513 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:14:51.513 13:34:54 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
13:34:54 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
13:34:54 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
13:34:54 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
13:34:54 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
13:34:54 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
13:34:54 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
13:34:54 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
13:34:54 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
13:34:54 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:14:51.771 13:34:54 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
13:34:54 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
13:34:54 -- scripts/common.sh@15 -- $ shopt -s extglob
13:34:54 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
13:34:54 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
13:34:54 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:51.771 13:34:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:34:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:34:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:34:54 -- paths/export.sh@5 -- $ export PATH
13:34:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:51.772 13:34:54 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
13:34:54 -- common/autobuild_common.sh@493 -- $ date +%s
13:34:54 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732109694.XXXXXX
13:34:54 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732109694.FqGej7
13:34:54 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
13:34:54 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
13:34:54 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
13:34:54 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
13:34:54 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
13:34:54 -- common/autobuild_common.sh@509 -- $ get_config_params
13:34:54 -- common/autotest_common.sh@409 -- $ xtrace_disable
13:34:54 -- common/autotest_common.sh@10 -- $ set +x
13:34:54 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
13:34:54 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
13:34:54 -- pm/common@17 -- $ local monitor
13:34:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:34:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:34:54 -- pm/common@25 -- $ sleep 1
13:34:54 -- pm/common@21 -- $ date +%s
13:34:54 -- pm/common@21 -- $ date +%s
00:14:51.772 13:34:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732109694
13:34:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732109694
00:14:51.772 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732109694_collect-cpu-load.pm.log
00:14:51.772 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732109694_collect-vmstat.pm.log
00:14:52.755 13:34:55 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
13:34:55 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
13:34:55 -- spdk/autobuild.sh@12 -- $ umask 022
13:34:55 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
13:34:55 -- spdk/autobuild.sh@16 -- $ date -u
00:14:52.755 Wed Nov 20 01:34:55 PM UTC 2024
13:34:55 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:14:52.755 v25.01-pre-251-gfa4f4fd15
13:34:55 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
13:34:55 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
13:34:55 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
13:34:55 -- common/autotest_common.sh@1111 -- $ xtrace_disable
13:34:55 -- common/autotest_common.sh@10 -- $ set +x
00:14:52.755 ************************************
00:14:52.755 START TEST asan
00:14:52.755 ************************************
00:14:52.755 using asan
13:34:55 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:14:52.755
00:14:52.755 real 0m0.000s
00:14:52.755 user 0m0.000s
00:14:52.755 sys 0m0.000s
13:34:55 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
13:34:55 asan -- common/autotest_common.sh@10 -- $ set +x
00:14:52.755 ************************************
00:14:52.755 END TEST asan
00:14:52.755 ************************************
13:34:55 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
13:34:55 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
13:34:55 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
13:34:55 -- common/autotest_common.sh@1111 -- $ xtrace_disable
13:34:55 -- common/autotest_common.sh@10 -- $ set +x
00:14:52.755 ************************************
00:14:52.755 START TEST ubsan
00:14:52.755 ************************************
00:14:52.755 using ubsan
13:34:55 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:14:52.755
00:14:52.755 real 0m0.000s
00:14:52.755 user 0m0.000s
00:14:52.755 sys 0m0.000s
13:34:55 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:14:52.756 ************************************
00:14:52.756 END TEST ubsan
00:14:52.756 ************************************
13:34:55 ubsan -- common/autotest_common.sh@10 -- $ set +x
13:34:55 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
13:34:55 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
13:34:55 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
13:34:55 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
13:34:55 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
13:34:55 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
13:34:55 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
13:34:55 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
13:34:55 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:14:53.014 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:14:53.014 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:14:53.580 Using 'verbs' RDMA provider
00:15:09.416 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:15:21.636 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:15:21.636 Creating mk/config.mk...done.
00:15:21.636 Creating mk/cc.flags.mk...done.
00:15:21.636 Type 'make' to build.
00:15:21.636 13:35:24 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
13:35:24 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
13:35:24 -- common/autotest_common.sh@1111 -- $ xtrace_disable
13:35:24 -- common/autotest_common.sh@10 -- $ set +x
00:15:21.636 ************************************
00:15:21.636 START TEST make
00:15:21.636 ************************************
13:35:24 make -- common/autotest_common.sh@1129 -- $ make -j10
00:15:21.894 make[1]: Nothing to be done for 'all'.
00:15:34.092 The Meson build system 00:15:34.092 Version: 1.5.0 00:15:34.092 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:15:34.092 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:15:34.092 Build type: native build 00:15:34.092 Program cat found: YES (/usr/bin/cat) 00:15:34.092 Project name: DPDK 00:15:34.092 Project version: 24.03.0 00:15:34.092 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:15:34.092 C linker for the host machine: cc ld.bfd 2.40-14 00:15:34.092 Host machine cpu family: x86_64 00:15:34.092 Host machine cpu: x86_64 00:15:34.092 Message: ## Building in Developer Mode ## 00:15:34.092 Program pkg-config found: YES (/usr/bin/pkg-config) 00:15:34.092 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:15:34.092 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:15:34.092 Program python3 found: YES (/usr/bin/python3) 00:15:34.092 Program cat found: YES (/usr/bin/cat) 00:15:34.092 Compiler for C supports arguments -march=native: YES 00:15:34.092 Checking for size of "void *" : 8 00:15:34.092 Checking for size of "void *" : 8 (cached) 00:15:34.092 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:15:34.092 Library m found: YES 00:15:34.092 Library numa found: YES 00:15:34.092 Has header "numaif.h" : YES 00:15:34.092 Library fdt found: NO 00:15:34.092 Library execinfo found: NO 00:15:34.092 Has header "execinfo.h" : YES 00:15:34.092 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:15:34.092 Run-time dependency libarchive found: NO (tried pkgconfig) 00:15:34.092 Run-time dependency libbsd found: NO (tried pkgconfig) 00:15:34.092 Run-time dependency jansson found: NO (tried pkgconfig) 00:15:34.092 Run-time dependency openssl found: YES 3.1.1 00:15:34.092 Run-time dependency libpcap found: YES 1.10.4 00:15:34.092 Has header "pcap.h" with dependency 
libpcap: YES 00:15:34.092 Compiler for C supports arguments -Wcast-qual: YES 00:15:34.092 Compiler for C supports arguments -Wdeprecated: YES 00:15:34.093 Compiler for C supports arguments -Wformat: YES 00:15:34.093 Compiler for C supports arguments -Wformat-nonliteral: NO 00:15:34.093 Compiler for C supports arguments -Wformat-security: NO 00:15:34.093 Compiler for C supports arguments -Wmissing-declarations: YES 00:15:34.093 Compiler for C supports arguments -Wmissing-prototypes: YES 00:15:34.093 Compiler for C supports arguments -Wnested-externs: YES 00:15:34.093 Compiler for C supports arguments -Wold-style-definition: YES 00:15:34.093 Compiler for C supports arguments -Wpointer-arith: YES 00:15:34.093 Compiler for C supports arguments -Wsign-compare: YES 00:15:34.093 Compiler for C supports arguments -Wstrict-prototypes: YES 00:15:34.093 Compiler for C supports arguments -Wundef: YES 00:15:34.093 Compiler for C supports arguments -Wwrite-strings: YES 00:15:34.093 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:15:34.093 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:15:34.093 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:15:34.093 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:15:34.093 Program objdump found: YES (/usr/bin/objdump) 00:15:34.093 Compiler for C supports arguments -mavx512f: YES 00:15:34.093 Checking if "AVX512 checking" compiles: YES 00:15:34.093 Fetching value of define "__SSE4_2__" : 1 00:15:34.093 Fetching value of define "__AES__" : 1 00:15:34.093 Fetching value of define "__AVX__" : 1 00:15:34.093 Fetching value of define "__AVX2__" : 1 00:15:34.093 Fetching value of define "__AVX512BW__" : (undefined) 00:15:34.093 Fetching value of define "__AVX512CD__" : (undefined) 00:15:34.093 Fetching value of define "__AVX512DQ__" : (undefined) 00:15:34.093 Fetching value of define "__AVX512F__" : (undefined) 00:15:34.093 Fetching value of define "__AVX512VL__" : 
(undefined) 00:15:34.093 Fetching value of define "__PCLMUL__" : 1 00:15:34.093 Fetching value of define "__RDRND__" : 1 00:15:34.093 Fetching value of define "__RDSEED__" : 1 00:15:34.093 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:15:34.093 Fetching value of define "__znver1__" : (undefined) 00:15:34.093 Fetching value of define "__znver2__" : (undefined) 00:15:34.093 Fetching value of define "__znver3__" : (undefined) 00:15:34.093 Fetching value of define "__znver4__" : (undefined) 00:15:34.093 Library asan found: YES 00:15:34.093 Compiler for C supports arguments -Wno-format-truncation: YES 00:15:34.093 Message: lib/log: Defining dependency "log" 00:15:34.093 Message: lib/kvargs: Defining dependency "kvargs" 00:15:34.093 Message: lib/telemetry: Defining dependency "telemetry" 00:15:34.093 Library rt found: YES 00:15:34.093 Checking for function "getentropy" : NO 00:15:34.093 Message: lib/eal: Defining dependency "eal" 00:15:34.093 Message: lib/ring: Defining dependency "ring" 00:15:34.093 Message: lib/rcu: Defining dependency "rcu" 00:15:34.093 Message: lib/mempool: Defining dependency "mempool" 00:15:34.093 Message: lib/mbuf: Defining dependency "mbuf" 00:15:34.093 Fetching value of define "__PCLMUL__" : 1 (cached) 00:15:34.093 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:15:34.093 Compiler for C supports arguments -mpclmul: YES 00:15:34.093 Compiler for C supports arguments -maes: YES 00:15:34.093 Compiler for C supports arguments -mavx512f: YES (cached) 00:15:34.093 Compiler for C supports arguments -mavx512bw: YES 00:15:34.093 Compiler for C supports arguments -mavx512dq: YES 00:15:34.093 Compiler for C supports arguments -mavx512vl: YES 00:15:34.093 Compiler for C supports arguments -mvpclmulqdq: YES 00:15:34.093 Compiler for C supports arguments -mavx2: YES 00:15:34.093 Compiler for C supports arguments -mavx: YES 00:15:34.093 Message: lib/net: Defining dependency "net" 00:15:34.093 Message: lib/meter: Defining 
dependency "meter" 00:15:34.093 Message: lib/ethdev: Defining dependency "ethdev" 00:15:34.093 Message: lib/pci: Defining dependency "pci" 00:15:34.093 Message: lib/cmdline: Defining dependency "cmdline" 00:15:34.093 Message: lib/hash: Defining dependency "hash" 00:15:34.093 Message: lib/timer: Defining dependency "timer" 00:15:34.093 Message: lib/compressdev: Defining dependency "compressdev" 00:15:34.093 Message: lib/cryptodev: Defining dependency "cryptodev" 00:15:34.093 Message: lib/dmadev: Defining dependency "dmadev" 00:15:34.093 Compiler for C supports arguments -Wno-cast-qual: YES 00:15:34.093 Message: lib/power: Defining dependency "power" 00:15:34.093 Message: lib/reorder: Defining dependency "reorder" 00:15:34.093 Message: lib/security: Defining dependency "security" 00:15:34.093 Has header "linux/userfaultfd.h" : YES 00:15:34.093 Has header "linux/vduse.h" : YES 00:15:34.093 Message: lib/vhost: Defining dependency "vhost" 00:15:34.093 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:15:34.093 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:15:34.093 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:15:34.093 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:15:34.093 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:15:34.093 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:15:34.093 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:15:34.093 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:15:34.093 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:15:34.093 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:15:34.093 Program doxygen found: YES (/usr/local/bin/doxygen) 00:15:34.093 Configuring doxy-api-html.conf using configuration 00:15:34.093 Configuring doxy-api-man.conf using configuration 00:15:34.093 Program mandb found: YES 
(/usr/bin/mandb) 00:15:34.093 Program sphinx-build found: NO 00:15:34.093 Configuring rte_build_config.h using configuration 00:15:34.093 Message: 00:15:34.093 ================= 00:15:34.093 Applications Enabled 00:15:34.093 ================= 00:15:34.093 00:15:34.093 apps: 00:15:34.093 00:15:34.093 00:15:34.093 Message: 00:15:34.093 ================= 00:15:34.093 Libraries Enabled 00:15:34.093 ================= 00:15:34.093 00:15:34.093 libs: 00:15:34.093 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:15:34.093 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:15:34.093 cryptodev, dmadev, power, reorder, security, vhost, 00:15:34.093 00:15:34.093 Message: 00:15:34.093 =============== 00:15:34.093 Drivers Enabled 00:15:34.093 =============== 00:15:34.093 00:15:34.093 common: 00:15:34.093 00:15:34.093 bus: 00:15:34.093 pci, vdev, 00:15:34.093 mempool: 00:15:34.093 ring, 00:15:34.093 dma: 00:15:34.093 00:15:34.093 net: 00:15:34.093 00:15:34.093 crypto: 00:15:34.093 00:15:34.093 compress: 00:15:34.093 00:15:34.093 vdpa: 00:15:34.093 00:15:34.093 00:15:34.093 Message: 00:15:34.093 ================= 00:15:34.093 Content Skipped 00:15:34.093 ================= 00:15:34.093 00:15:34.093 apps: 00:15:34.093 dumpcap: explicitly disabled via build config 00:15:34.093 graph: explicitly disabled via build config 00:15:34.093 pdump: explicitly disabled via build config 00:15:34.093 proc-info: explicitly disabled via build config 00:15:34.093 test-acl: explicitly disabled via build config 00:15:34.093 test-bbdev: explicitly disabled via build config 00:15:34.093 test-cmdline: explicitly disabled via build config 00:15:34.093 test-compress-perf: explicitly disabled via build config 00:15:34.093 test-crypto-perf: explicitly disabled via build config 00:15:34.093 test-dma-perf: explicitly disabled via build config 00:15:34.093 test-eventdev: explicitly disabled via build config 00:15:34.093 test-fib: explicitly disabled via build config 00:15:34.093 
test-flow-perf: explicitly disabled via build config 00:15:34.093 test-gpudev: explicitly disabled via build config 00:15:34.093 test-mldev: explicitly disabled via build config 00:15:34.093 test-pipeline: explicitly disabled via build config 00:15:34.093 test-pmd: explicitly disabled via build config 00:15:34.093 test-regex: explicitly disabled via build config 00:15:34.093 test-sad: explicitly disabled via build config 00:15:34.093 test-security-perf: explicitly disabled via build config 00:15:34.093 00:15:34.093 libs: 00:15:34.093 argparse: explicitly disabled via build config 00:15:34.093 metrics: explicitly disabled via build config 00:15:34.093 acl: explicitly disabled via build config 00:15:34.093 bbdev: explicitly disabled via build config 00:15:34.093 bitratestats: explicitly disabled via build config 00:15:34.093 bpf: explicitly disabled via build config 00:15:34.093 cfgfile: explicitly disabled via build config 00:15:34.093 distributor: explicitly disabled via build config 00:15:34.093 efd: explicitly disabled via build config 00:15:34.093 eventdev: explicitly disabled via build config 00:15:34.093 dispatcher: explicitly disabled via build config 00:15:34.093 gpudev: explicitly disabled via build config 00:15:34.093 gro: explicitly disabled via build config 00:15:34.093 gso: explicitly disabled via build config 00:15:34.093 ip_frag: explicitly disabled via build config 00:15:34.093 jobstats: explicitly disabled via build config 00:15:34.093 latencystats: explicitly disabled via build config 00:15:34.093 lpm: explicitly disabled via build config 00:15:34.093 member: explicitly disabled via build config 00:15:34.093 pcapng: explicitly disabled via build config 00:15:34.093 rawdev: explicitly disabled via build config 00:15:34.093 regexdev: explicitly disabled via build config 00:15:34.093 mldev: explicitly disabled via build config 00:15:34.093 rib: explicitly disabled via build config 00:15:34.093 sched: explicitly disabled via build config 00:15:34.093 
stack: explicitly disabled via build config 00:15:34.093 ipsec: explicitly disabled via build config 00:15:34.093 pdcp: explicitly disabled via build config 00:15:34.093 fib: explicitly disabled via build config 00:15:34.093 port: explicitly disabled via build config 00:15:34.093 pdump: explicitly disabled via build config 00:15:34.093 table: explicitly disabled via build config 00:15:34.093 pipeline: explicitly disabled via build config 00:15:34.093 graph: explicitly disabled via build config 00:15:34.093 node: explicitly disabled via build config 00:15:34.093 00:15:34.093 drivers: 00:15:34.093 common/cpt: not in enabled drivers build config 00:15:34.094 common/dpaax: not in enabled drivers build config 00:15:34.094 common/iavf: not in enabled drivers build config 00:15:34.094 common/idpf: not in enabled drivers build config 00:15:34.094 common/ionic: not in enabled drivers build config 00:15:34.094 common/mvep: not in enabled drivers build config 00:15:34.094 common/octeontx: not in enabled drivers build config 00:15:34.094 bus/auxiliary: not in enabled drivers build config 00:15:34.094 bus/cdx: not in enabled drivers build config 00:15:34.094 bus/dpaa: not in enabled drivers build config 00:15:34.094 bus/fslmc: not in enabled drivers build config 00:15:34.094 bus/ifpga: not in enabled drivers build config 00:15:34.094 bus/platform: not in enabled drivers build config 00:15:34.094 bus/uacce: not in enabled drivers build config 00:15:34.094 bus/vmbus: not in enabled drivers build config 00:15:34.094 common/cnxk: not in enabled drivers build config 00:15:34.094 common/mlx5: not in enabled drivers build config 00:15:34.094 common/nfp: not in enabled drivers build config 00:15:34.094 common/nitrox: not in enabled drivers build config 00:15:34.094 common/qat: not in enabled drivers build config 00:15:34.094 common/sfc_efx: not in enabled drivers build config 00:15:34.094 mempool/bucket: not in enabled drivers build config 00:15:34.094 mempool/cnxk: not in enabled 
drivers build config 00:15:34.094 mempool/dpaa: not in enabled drivers build config 00:15:34.094 mempool/dpaa2: not in enabled drivers build config 00:15:34.094 mempool/octeontx: not in enabled drivers build config 00:15:34.094 mempool/stack: not in enabled drivers build config 00:15:34.094 dma/cnxk: not in enabled drivers build config 00:15:34.094 dma/dpaa: not in enabled drivers build config 00:15:34.094 dma/dpaa2: not in enabled drivers build config 00:15:34.094 dma/hisilicon: not in enabled drivers build config 00:15:34.094 dma/idxd: not in enabled drivers build config 00:15:34.094 dma/ioat: not in enabled drivers build config 00:15:34.094 dma/skeleton: not in enabled drivers build config 00:15:34.094 net/af_packet: not in enabled drivers build config 00:15:34.094 net/af_xdp: not in enabled drivers build config 00:15:34.094 net/ark: not in enabled drivers build config 00:15:34.094 net/atlantic: not in enabled drivers build config 00:15:34.094 net/avp: not in enabled drivers build config 00:15:34.094 net/axgbe: not in enabled drivers build config 00:15:34.094 net/bnx2x: not in enabled drivers build config 00:15:34.094 net/bnxt: not in enabled drivers build config 00:15:34.094 net/bonding: not in enabled drivers build config 00:15:34.094 net/cnxk: not in enabled drivers build config 00:15:34.094 net/cpfl: not in enabled drivers build config 00:15:34.094 net/cxgbe: not in enabled drivers build config 00:15:34.094 net/dpaa: not in enabled drivers build config 00:15:34.094 net/dpaa2: not in enabled drivers build config 00:15:34.094 net/e1000: not in enabled drivers build config 00:15:34.094 net/ena: not in enabled drivers build config 00:15:34.094 net/enetc: not in enabled drivers build config 00:15:34.094 net/enetfec: not in enabled drivers build config 00:15:34.094 net/enic: not in enabled drivers build config 00:15:34.094 net/failsafe: not in enabled drivers build config 00:15:34.094 net/fm10k: not in enabled drivers build config 00:15:34.094 net/gve: not in 
enabled drivers build config 00:15:34.094 net/hinic: not in enabled drivers build config 00:15:34.094 net/hns3: not in enabled drivers build config 00:15:34.094 net/i40e: not in enabled drivers build config 00:15:34.094 net/iavf: not in enabled drivers build config 00:15:34.094 net/ice: not in enabled drivers build config 00:15:34.094 net/idpf: not in enabled drivers build config 00:15:34.094 net/igc: not in enabled drivers build config 00:15:34.094 net/ionic: not in enabled drivers build config 00:15:34.094 net/ipn3ke: not in enabled drivers build config 00:15:34.094 net/ixgbe: not in enabled drivers build config 00:15:34.094 net/mana: not in enabled drivers build config 00:15:34.094 net/memif: not in enabled drivers build config 00:15:34.094 net/mlx4: not in enabled drivers build config 00:15:34.094 net/mlx5: not in enabled drivers build config 00:15:34.094 net/mvneta: not in enabled drivers build config 00:15:34.094 net/mvpp2: not in enabled drivers build config 00:15:34.094 net/netvsc: not in enabled drivers build config 00:15:34.094 net/nfb: not in enabled drivers build config 00:15:34.094 net/nfp: not in enabled drivers build config 00:15:34.094 net/ngbe: not in enabled drivers build config 00:15:34.094 net/null: not in enabled drivers build config 00:15:34.094 net/octeontx: not in enabled drivers build config 00:15:34.094 net/octeon_ep: not in enabled drivers build config 00:15:34.094 net/pcap: not in enabled drivers build config 00:15:34.094 net/pfe: not in enabled drivers build config 00:15:34.094 net/qede: not in enabled drivers build config 00:15:34.094 net/ring: not in enabled drivers build config 00:15:34.094 net/sfc: not in enabled drivers build config 00:15:34.094 net/softnic: not in enabled drivers build config 00:15:34.094 net/tap: not in enabled drivers build config 00:15:34.094 net/thunderx: not in enabled drivers build config 00:15:34.094 net/txgbe: not in enabled drivers build config 00:15:34.094 net/vdev_netvsc: not in enabled drivers build 
config 00:15:34.094 net/vhost: not in enabled drivers build config 00:15:34.094 net/virtio: not in enabled drivers build config 00:15:34.094 net/vmxnet3: not in enabled drivers build config 00:15:34.094 raw/*: missing internal dependency, "rawdev" 00:15:34.094 crypto/armv8: not in enabled drivers build config 00:15:34.094 crypto/bcmfs: not in enabled drivers build config 00:15:34.094 crypto/caam_jr: not in enabled drivers build config 00:15:34.094 crypto/ccp: not in enabled drivers build config 00:15:34.094 crypto/cnxk: not in enabled drivers build config 00:15:34.094 crypto/dpaa_sec: not in enabled drivers build config 00:15:34.094 crypto/dpaa2_sec: not in enabled drivers build config 00:15:34.094 crypto/ipsec_mb: not in enabled drivers build config 00:15:34.094 crypto/mlx5: not in enabled drivers build config 00:15:34.094 crypto/mvsam: not in enabled drivers build config 00:15:34.094 crypto/nitrox: not in enabled drivers build config 00:15:34.094 crypto/null: not in enabled drivers build config 00:15:34.094 crypto/octeontx: not in enabled drivers build config 00:15:34.094 crypto/openssl: not in enabled drivers build config 00:15:34.094 crypto/scheduler: not in enabled drivers build config 00:15:34.094 crypto/uadk: not in enabled drivers build config 00:15:34.094 crypto/virtio: not in enabled drivers build config 00:15:34.094 compress/isal: not in enabled drivers build config 00:15:34.094 compress/mlx5: not in enabled drivers build config 00:15:34.094 compress/nitrox: not in enabled drivers build config 00:15:34.094 compress/octeontx: not in enabled drivers build config 00:15:34.094 compress/zlib: not in enabled drivers build config 00:15:34.094 regex/*: missing internal dependency, "regexdev" 00:15:34.094 ml/*: missing internal dependency, "mldev" 00:15:34.094 vdpa/ifc: not in enabled drivers build config 00:15:34.094 vdpa/mlx5: not in enabled drivers build config 00:15:34.094 vdpa/nfp: not in enabled drivers build config 00:15:34.094 vdpa/sfc: not in enabled 
drivers build config 00:15:34.094 event/*: missing internal dependency, "eventdev" 00:15:34.094 baseband/*: missing internal dependency, "bbdev" 00:15:34.094 gpu/*: missing internal dependency, "gpudev" 00:15:34.094 00:15:34.094 00:15:34.677 Build targets in project: 85 00:15:34.677 00:15:34.677 DPDK 24.03.0 00:15:34.677 00:15:34.677 User defined options 00:15:34.677 buildtype : debug 00:15:34.677 default_library : shared 00:15:34.677 libdir : lib 00:15:34.677 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:15:34.677 b_sanitize : address 00:15:34.677 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:15:34.677 c_link_args : 00:15:34.677 cpu_instruction_set: native 00:15:34.677 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:15:34.677 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:15:34.677 enable_docs : false 00:15:34.677 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:15:34.677 enable_kmods : false 00:15:34.677 max_lcores : 128 00:15:34.677 tests : false 00:15:34.677 00:15:34.677 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:15:35.244 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:15:35.244 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:15:35.244 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:15:35.244 [3/268] Linking static target lib/librte_kvargs.a 00:15:35.502 [4/268] Compiling C object 
lib/librte_log.a.p/log_log.c.o 00:15:35.502 [5/268] Linking static target lib/librte_log.a 00:15:35.502 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:15:36.069 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:15:36.069 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:15:36.069 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:15:36.329 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:15:36.329 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:15:36.329 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:15:36.329 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:15:36.329 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:15:36.329 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:15:36.329 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:15:36.602 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:15:36.602 [18/268] Linking static target lib/librte_telemetry.a 00:15:36.602 [19/268] Linking target lib/librte_log.so.24.1 00:15:36.602 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:15:36.877 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:15:36.877 [22/268] Linking target lib/librte_kvargs.so.24.1 00:15:37.135 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:15:37.135 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:15:37.135 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:15:37.135 [26/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:15:37.393 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:15:37.393 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:15:37.393 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:15:37.394 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:15:37.652 [31/268] Linking target lib/librte_telemetry.so.24.1 00:15:37.652 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:15:37.652 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:15:37.652 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:15:37.652 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:15:37.909 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:15:37.909 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:15:38.167 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:15:38.167 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:15:38.167 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:15:38.424 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:15:38.424 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:15:38.424 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:15:38.424 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:15:38.682 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:15:38.682 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:15:38.682 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 
00:15:38.939 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:15:38.939 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:15:38.939 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:15:39.196 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:15:39.196 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:15:39.454 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:15:39.712 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:15:39.712 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:15:39.712 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:15:39.712 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:15:39.712 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:15:39.712 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:15:39.970 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:15:39.970 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:15:39.970 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:15:40.229 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:15:40.488 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:15:40.488 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:15:40.488 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:15:40.747 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:15:40.747 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:15:41.006 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:15:41.006 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:15:41.006 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:15:41.006 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:15:41.006 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:15:41.283 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:15:41.283 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:15:41.284 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:15:41.284 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:15:41.284 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:15:41.284 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:15:41.851 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:15:41.851 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:15:41.851 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:15:41.851 [83/268] Linking static target lib/librte_ring.a 00:15:41.851 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:15:41.851 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:15:42.110 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:15:42.110 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:15:42.110 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:15:42.110 [89/268] Linking static target lib/librte_eal.a 00:15:42.369 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:15:42.369 [91/268] Linking static target lib/librte_rcu.a 00:15:42.369 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:15:42.369 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:15:42.369 [94/268] 
Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:15:42.369 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:15:42.627 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:15:42.627 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:15:42.886 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:15:42.886 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:15:42.886 [100/268] Linking static target lib/librte_mempool.a 00:15:42.886 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:15:43.146 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:15:43.146 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:15:43.146 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:15:43.146 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:15:43.146 [106/268] Linking static target lib/librte_mbuf.a 00:15:43.405 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:15:43.405 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:15:43.405 [109/268] Linking static target lib/librte_net.a 00:15:43.405 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:15:43.405 [111/268] Linking static target lib/librte_meter.a 00:15:43.663 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:15:43.922 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:15:43.922 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:15:43.922 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:15:43.922 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:15:44.181 [117/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:15:44.181 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:15:44.439 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:15:44.439 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:15:45.006 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:15:45.006 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:15:45.264 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:15:45.264 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:15:45.523 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:15:45.523 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:15:45.523 [127/268] Linking static target lib/librte_pci.a 00:15:45.523 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:15:45.523 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:15:45.782 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:15:45.782 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:15:45.783 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:15:45.783 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:15:46.042 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:15:46.042 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:15:46.042 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:15:46.042 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:15:46.042 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:15:46.042 [139/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:15:46.042 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:15:46.042 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:15:46.042 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:15:46.301 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:15:46.301 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:15:46.301 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:15:46.301 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:15:46.560 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:15:46.560 [148/268] Linking static target lib/librte_cmdline.a 00:15:46.818 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:15:47.076 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:15:47.076 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:15:47.076 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:15:47.077 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:15:47.077 [154/268] Linking static target lib/librte_timer.a 00:15:47.643 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:15:47.643 [156/268] Linking static target lib/librte_ethdev.a 00:15:47.643 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:15:47.643 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:15:47.643 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:15:47.902 [160/268] Linking static target lib/librte_hash.a 00:15:47.902 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:15:47.902 [162/268] 
Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:15:47.902 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:15:47.902 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:15:47.902 [165/268] Linking static target lib/librte_compressdev.a 00:15:48.165 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:15:48.431 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:15:48.431 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:15:48.431 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:15:48.432 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:15:48.691 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:15:48.691 [172/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:15:48.691 [173/268] Linking static target lib/librte_dmadev.a 00:15:48.950 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:48.950 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:15:49.209 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:15:49.209 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:15:49.469 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:15:49.469 [179/268] Linking static target lib/librte_cryptodev.a 00:15:49.469 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:15:49.728 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:15:49.728 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:15:49.728 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:15:49.728 [184/268] 
Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:49.986 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:15:49.986 [186/268] Linking static target lib/librte_power.a 00:15:50.244 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:15:50.244 [188/268] Linking static target lib/librte_reorder.a 00:15:50.503 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:15:50.503 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:15:50.503 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:15:50.503 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:15:50.503 [193/268] Linking static target lib/librte_security.a 00:15:51.068 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:15:51.068 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:15:51.326 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:15:51.326 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:15:51.584 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:15:51.842 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:15:51.842 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:15:52.101 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:15:52.101 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:52.101 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:15:52.101 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:15:52.359 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:15:52.618 [206/268] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:15:52.618 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:15:52.877 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:15:52.877 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:15:52.877 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:15:52.877 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:15:53.139 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:15:53.139 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:15:53.139 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:15:53.139 [215/268] Linking static target drivers/librte_bus_vdev.a 00:15:53.139 [216/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:15:53.139 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:15:53.139 [218/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:15:53.139 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:15:53.139 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:15:53.139 [221/268] Linking static target drivers/librte_bus_pci.a 00:15:53.398 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:15:53.398 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:15:53.398 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:15:53.398 [225/268] Linking static target drivers/librte_mempool_ring.a 00:15:53.398 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:53.657 
[227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:15:54.593 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:15:54.851 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:15:54.851 [230/268] Linking target lib/librte_eal.so.24.1 00:15:54.851 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:15:55.109 [232/268] Linking target lib/librte_ring.so.24.1 00:15:55.110 [233/268] Linking target lib/librte_dmadev.so.24.1 00:15:55.110 [234/268] Linking target lib/librte_timer.so.24.1 00:15:55.110 [235/268] Linking target lib/librte_meter.so.24.1 00:15:55.110 [236/268] Linking target lib/librte_pci.so.24.1 00:15:55.110 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:15:55.110 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:15:55.110 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:15:55.110 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:15:55.110 [241/268] Linking target lib/librte_rcu.so.24.1 00:15:55.368 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:15:55.368 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:15:55.368 [244/268] Linking target lib/librte_mempool.so.24.1 00:15:55.368 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:15:55.368 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:15:55.368 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:15:55.368 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:15:55.627 [249/268] Linking target lib/librte_mbuf.so.24.1 00:15:55.627 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:15:55.627 
[251/268] Linking target lib/librte_cryptodev.so.24.1 00:15:55.627 [252/268] Linking target lib/librte_compressdev.so.24.1 00:15:55.627 [253/268] Linking target lib/librte_net.so.24.1 00:15:55.627 [254/268] Linking target lib/librte_reorder.so.24.1 00:15:55.886 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:15:55.886 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:15:55.886 [257/268] Linking target lib/librte_hash.so.24.1 00:15:55.886 [258/268] Linking target lib/librte_security.so.24.1 00:15:55.886 [259/268] Linking target lib/librte_cmdline.so.24.1 00:15:56.145 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:15:56.145 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:56.145 [262/268] Linking target lib/librte_ethdev.so.24.1 00:15:56.405 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:15:56.405 [264/268] Linking target lib/librte_power.so.24.1 00:15:58.966 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:15:58.966 [266/268] Linking static target lib/librte_vhost.a 00:16:00.867 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:16:00.867 [268/268] Linking target lib/librte_vhost.so.24.1 00:16:00.867 INFO: autodetecting backend as ninja 00:16:00.867 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:16:22.790 CC lib/ut_mock/mock.o 00:16:22.790 CC lib/ut/ut.o 00:16:22.790 CC lib/log/log_flags.o 00:16:22.790 CC lib/log/log.o 00:16:22.790 CC lib/log/log_deprecated.o 00:16:22.790 LIB libspdk_ut.a 00:16:22.790 LIB libspdk_ut_mock.a 00:16:22.790 SO libspdk_ut.so.2.0 00:16:22.790 SO libspdk_ut_mock.so.6.0 00:16:22.790 LIB libspdk_log.a 00:16:22.790 SYMLINK libspdk_ut.so 00:16:22.790 SYMLINK 
libspdk_ut_mock.so 00:16:22.790 SO libspdk_log.so.7.1 00:16:22.790 SYMLINK libspdk_log.so 00:16:22.790 CC lib/util/base64.o 00:16:22.790 CC lib/dma/dma.o 00:16:22.790 CC lib/util/bit_array.o 00:16:22.790 CC lib/util/cpuset.o 00:16:22.790 CC lib/util/crc16.o 00:16:22.790 CC lib/util/crc32c.o 00:16:22.790 CC lib/util/crc32.o 00:16:22.790 CC lib/ioat/ioat.o 00:16:22.790 CXX lib/trace_parser/trace.o 00:16:22.790 CC lib/vfio_user/host/vfio_user_pci.o 00:16:22.790 CC lib/util/crc32_ieee.o 00:16:22.790 CC lib/util/crc64.o 00:16:22.790 CC lib/util/dif.o 00:16:22.790 CC lib/util/fd.o 00:16:22.790 LIB libspdk_dma.a 00:16:22.790 CC lib/util/fd_group.o 00:16:22.790 SO libspdk_dma.so.5.0 00:16:22.790 CC lib/util/file.o 00:16:22.790 CC lib/util/hexlify.o 00:16:22.790 CC lib/vfio_user/host/vfio_user.o 00:16:23.049 SYMLINK libspdk_dma.so 00:16:23.049 CC lib/util/iov.o 00:16:23.049 CC lib/util/math.o 00:16:23.049 CC lib/util/net.o 00:16:23.049 CC lib/util/pipe.o 00:16:23.049 CC lib/util/strerror_tls.o 00:16:23.049 LIB libspdk_ioat.a 00:16:23.049 SO libspdk_ioat.so.7.0 00:16:23.049 CC lib/util/string.o 00:16:23.049 LIB libspdk_vfio_user.a 00:16:23.049 CC lib/util/uuid.o 00:16:23.309 SO libspdk_vfio_user.so.5.0 00:16:23.309 CC lib/util/xor.o 00:16:23.309 CC lib/util/zipf.o 00:16:23.309 SYMLINK libspdk_ioat.so 00:16:23.309 SYMLINK libspdk_vfio_user.so 00:16:23.309 CC lib/util/md5.o 00:16:23.875 LIB libspdk_util.a 00:16:23.875 SO libspdk_util.so.10.1 00:16:24.134 LIB libspdk_trace_parser.a 00:16:24.134 SYMLINK libspdk_util.so 00:16:24.134 SO libspdk_trace_parser.so.6.0 00:16:24.392 SYMLINK libspdk_trace_parser.so 00:16:24.392 CC lib/env_dpdk/env.o 00:16:24.392 CC lib/conf/conf.o 00:16:24.392 CC lib/env_dpdk/memory.o 00:16:24.392 CC lib/env_dpdk/pci.o 00:16:24.392 CC lib/env_dpdk/init.o 00:16:24.392 CC lib/env_dpdk/threads.o 00:16:24.392 CC lib/idxd/idxd.o 00:16:24.392 CC lib/json/json_parse.o 00:16:24.392 CC lib/vmd/vmd.o 00:16:24.392 CC lib/rdma_utils/rdma_utils.o 00:16:24.392 CC 
lib/env_dpdk/pci_ioat.o 00:16:24.651 LIB libspdk_conf.a 00:16:24.651 CC lib/json/json_util.o 00:16:24.651 SO libspdk_conf.so.6.0 00:16:24.651 CC lib/vmd/led.o 00:16:24.651 LIB libspdk_rdma_utils.a 00:16:24.651 SYMLINK libspdk_conf.so 00:16:24.651 CC lib/env_dpdk/pci_virtio.o 00:16:24.651 SO libspdk_rdma_utils.so.1.0 00:16:24.651 CC lib/json/json_write.o 00:16:24.651 SYMLINK libspdk_rdma_utils.so 00:16:24.909 CC lib/env_dpdk/pci_vmd.o 00:16:24.909 CC lib/env_dpdk/pci_idxd.o 00:16:24.909 CC lib/env_dpdk/pci_event.o 00:16:24.909 CC lib/env_dpdk/sigbus_handler.o 00:16:24.909 CC lib/env_dpdk/pci_dpdk.o 00:16:24.909 CC lib/env_dpdk/pci_dpdk_2207.o 00:16:24.909 CC lib/env_dpdk/pci_dpdk_2211.o 00:16:24.909 CC lib/idxd/idxd_user.o 00:16:25.168 CC lib/idxd/idxd_kernel.o 00:16:25.168 CC lib/rdma_provider/common.o 00:16:25.168 LIB libspdk_json.a 00:16:25.168 CC lib/rdma_provider/rdma_provider_verbs.o 00:16:25.168 SO libspdk_json.so.6.0 00:16:25.168 LIB libspdk_vmd.a 00:16:25.168 SYMLINK libspdk_json.so 00:16:25.168 SO libspdk_vmd.so.6.0 00:16:25.427 SYMLINK libspdk_vmd.so 00:16:25.427 LIB libspdk_idxd.a 00:16:25.427 SO libspdk_idxd.so.12.1 00:16:25.427 SYMLINK libspdk_idxd.so 00:16:25.427 CC lib/jsonrpc/jsonrpc_server.o 00:16:25.427 CC lib/jsonrpc/jsonrpc_client.o 00:16:25.427 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:16:25.427 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:16:25.427 LIB libspdk_rdma_provider.a 00:16:25.427 SO libspdk_rdma_provider.so.7.0 00:16:25.686 SYMLINK libspdk_rdma_provider.so 00:16:25.686 LIB libspdk_jsonrpc.a 00:16:25.945 SO libspdk_jsonrpc.so.6.0 00:16:25.945 SYMLINK libspdk_jsonrpc.so 00:16:26.203 CC lib/rpc/rpc.o 00:16:26.203 LIB libspdk_env_dpdk.a 00:16:26.460 SO libspdk_env_dpdk.so.15.1 00:16:26.460 LIB libspdk_rpc.a 00:16:26.460 SO libspdk_rpc.so.6.0 00:16:26.460 SYMLINK libspdk_rpc.so 00:16:26.719 SYMLINK libspdk_env_dpdk.so 00:16:26.719 CC lib/notify/notify.o 00:16:26.719 CC lib/notify/notify_rpc.o 00:16:26.719 CC lib/trace/trace.o 00:16:26.719 CC 
lib/trace/trace_flags.o 00:16:26.719 CC lib/trace/trace_rpc.o 00:16:26.719 CC lib/keyring/keyring_rpc.o 00:16:26.719 CC lib/keyring/keyring.o 00:16:26.978 LIB libspdk_notify.a 00:16:26.978 SO libspdk_notify.so.6.0 00:16:26.978 SYMLINK libspdk_notify.so 00:16:27.242 LIB libspdk_trace.a 00:16:27.242 LIB libspdk_keyring.a 00:16:27.242 SO libspdk_trace.so.11.0 00:16:27.242 SO libspdk_keyring.so.2.0 00:16:27.242 SYMLINK libspdk_trace.so 00:16:27.242 SYMLINK libspdk_keyring.so 00:16:27.500 CC lib/sock/sock.o 00:16:27.500 CC lib/sock/sock_rpc.o 00:16:27.500 CC lib/thread/thread.o 00:16:27.500 CC lib/thread/iobuf.o 00:16:28.070 LIB libspdk_sock.a 00:16:28.328 SO libspdk_sock.so.10.0 00:16:28.328 SYMLINK libspdk_sock.so 00:16:28.587 CC lib/nvme/nvme_ctrlr_cmd.o 00:16:28.587 CC lib/nvme/nvme_ctrlr.o 00:16:28.587 CC lib/nvme/nvme_fabric.o 00:16:28.587 CC lib/nvme/nvme_ns_cmd.o 00:16:28.587 CC lib/nvme/nvme_ns.o 00:16:28.587 CC lib/nvme/nvme_pcie_common.o 00:16:28.587 CC lib/nvme/nvme_pcie.o 00:16:28.587 CC lib/nvme/nvme_qpair.o 00:16:28.587 CC lib/nvme/nvme.o 00:16:29.523 CC lib/nvme/nvme_quirks.o 00:16:29.523 CC lib/nvme/nvme_transport.o 00:16:29.781 CC lib/nvme/nvme_discovery.o 00:16:29.781 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:16:30.040 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:16:30.337 CC lib/nvme/nvme_tcp.o 00:16:30.337 CC lib/nvme/nvme_opal.o 00:16:30.337 CC lib/nvme/nvme_io_msg.o 00:16:30.337 CC lib/nvme/nvme_poll_group.o 00:16:30.595 CC lib/nvme/nvme_zns.o 00:16:30.595 CC lib/nvme/nvme_stubs.o 00:16:30.595 CC lib/nvme/nvme_auth.o 00:16:30.852 LIB libspdk_thread.a 00:16:31.111 SO libspdk_thread.so.11.0 00:16:31.111 CC lib/nvme/nvme_cuse.o 00:16:31.111 CC lib/nvme/nvme_rdma.o 00:16:31.111 SYMLINK libspdk_thread.so 00:16:31.370 CC lib/blob/blobstore.o 00:16:31.370 CC lib/accel/accel.o 00:16:31.370 CC lib/init/json_config.o 00:16:31.370 CC lib/virtio/virtio.o 00:16:31.629 CC lib/fsdev/fsdev.o 00:16:31.629 CC lib/init/subsystem.o 00:16:31.887 CC lib/virtio/virtio_vhost_user.o 
00:16:31.887 CC lib/init/subsystem_rpc.o 00:16:32.146 CC lib/init/rpc.o 00:16:32.146 CC lib/virtio/virtio_vfio_user.o 00:16:32.146 CC lib/virtio/virtio_pci.o 00:16:32.404 LIB libspdk_init.a 00:16:32.404 CC lib/fsdev/fsdev_io.o 00:16:32.404 SO libspdk_init.so.6.0 00:16:32.404 SYMLINK libspdk_init.so 00:16:32.404 CC lib/fsdev/fsdev_rpc.o 00:16:32.404 CC lib/blob/request.o 00:16:32.662 CC lib/accel/accel_rpc.o 00:16:32.662 LIB libspdk_virtio.a 00:16:32.662 CC lib/event/app.o 00:16:32.662 CC lib/blob/zeroes.o 00:16:32.662 SO libspdk_virtio.so.7.0 00:16:32.662 SYMLINK libspdk_virtio.so 00:16:32.662 CC lib/blob/blob_bs_dev.o 00:16:32.921 LIB libspdk_fsdev.a 00:16:32.921 SO libspdk_fsdev.so.2.0 00:16:32.921 CC lib/accel/accel_sw.o 00:16:32.921 CC lib/event/reactor.o 00:16:32.921 CC lib/event/log_rpc.o 00:16:32.921 SYMLINK libspdk_fsdev.so 00:16:32.921 CC lib/event/app_rpc.o 00:16:33.179 CC lib/event/scheduler_static.o 00:16:33.179 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:16:33.179 LIB libspdk_accel.a 00:16:33.438 SO libspdk_accel.so.16.0 00:16:33.438 SYMLINK libspdk_accel.so 00:16:33.696 CC lib/bdev/bdev.o 00:16:33.696 CC lib/bdev/bdev_rpc.o 00:16:33.696 CC lib/bdev/bdev_zone.o 00:16:33.696 CC lib/bdev/part.o 00:16:33.696 CC lib/bdev/scsi_nvme.o 00:16:33.696 LIB libspdk_event.a 00:16:33.696 SO libspdk_event.so.14.0 00:16:33.954 SYMLINK libspdk_event.so 00:16:33.954 LIB libspdk_nvme.a 00:16:34.212 SO libspdk_nvme.so.15.0 00:16:34.212 LIB libspdk_fuse_dispatcher.a 00:16:34.471 SO libspdk_fuse_dispatcher.so.1.0 00:16:34.471 SYMLINK libspdk_fuse_dispatcher.so 00:16:34.471 SYMLINK libspdk_nvme.so 00:16:36.372 LIB libspdk_blob.a 00:16:36.372 SO libspdk_blob.so.11.0 00:16:36.372 SYMLINK libspdk_blob.so 00:16:36.629 CC lib/blobfs/tree.o 00:16:36.629 CC lib/blobfs/blobfs.o 00:16:36.629 CC lib/lvol/lvol.o 00:16:37.563 LIB libspdk_bdev.a 00:16:37.563 SO libspdk_bdev.so.17.0 00:16:37.820 SYMLINK libspdk_bdev.so 00:16:38.078 CC lib/ftl/ftl_core.o 00:16:38.078 CC lib/ftl/ftl_init.o 
00:16:38.078 CC lib/ftl/ftl_layout.o 00:16:38.078 CC lib/ftl/ftl_debug.o 00:16:38.078 CC lib/ublk/ublk.o 00:16:38.078 CC lib/nbd/nbd.o 00:16:38.078 CC lib/scsi/dev.o 00:16:38.078 LIB libspdk_blobfs.a 00:16:38.078 CC lib/nvmf/ctrlr.o 00:16:38.078 SO libspdk_blobfs.so.10.0 00:16:38.078 LIB libspdk_lvol.a 00:16:38.078 SO libspdk_lvol.so.10.0 00:16:38.394 SYMLINK libspdk_blobfs.so 00:16:38.394 CC lib/nvmf/ctrlr_discovery.o 00:16:38.394 SYMLINK libspdk_lvol.so 00:16:38.394 CC lib/nvmf/ctrlr_bdev.o 00:16:38.394 CC lib/ftl/ftl_io.o 00:16:38.394 CC lib/nvmf/subsystem.o 00:16:38.704 CC lib/ftl/ftl_sb.o 00:16:38.704 CC lib/scsi/lun.o 00:16:38.704 CC lib/scsi/port.o 00:16:38.704 CC lib/ftl/ftl_l2p.o 00:16:38.704 CC lib/nvmf/nvmf.o 00:16:38.704 CC lib/nbd/nbd_rpc.o 00:16:38.962 CC lib/nvmf/nvmf_rpc.o 00:16:38.962 CC lib/ftl/ftl_l2p_flat.o 00:16:38.962 CC lib/ftl/ftl_nv_cache.o 00:16:38.962 CC lib/scsi/scsi.o 00:16:38.962 LIB libspdk_nbd.a 00:16:38.962 SO libspdk_nbd.so.7.0 00:16:39.264 SYMLINK libspdk_nbd.so 00:16:39.264 CC lib/ublk/ublk_rpc.o 00:16:39.264 CC lib/nvmf/transport.o 00:16:39.264 CC lib/scsi/scsi_bdev.o 00:16:39.264 CC lib/scsi/scsi_pr.o 00:16:39.264 CC lib/nvmf/tcp.o 00:16:39.264 LIB libspdk_ublk.a 00:16:39.264 SO libspdk_ublk.so.3.0 00:16:39.522 SYMLINK libspdk_ublk.so 00:16:39.522 CC lib/nvmf/stubs.o 00:16:39.522 CC lib/ftl/ftl_band.o 00:16:39.779 CC lib/scsi/scsi_rpc.o 00:16:40.036 CC lib/nvmf/mdns_server.o 00:16:40.036 CC lib/nvmf/rdma.o 00:16:40.036 CC lib/scsi/task.o 00:16:40.036 CC lib/nvmf/auth.o 00:16:40.036 CC lib/ftl/ftl_band_ops.o 00:16:40.036 CC lib/ftl/ftl_writer.o 00:16:40.293 LIB libspdk_scsi.a 00:16:40.293 CC lib/ftl/ftl_rq.o 00:16:40.293 CC lib/ftl/ftl_reloc.o 00:16:40.293 SO libspdk_scsi.so.9.0 00:16:40.551 SYMLINK libspdk_scsi.so 00:16:40.551 CC lib/ftl/ftl_l2p_cache.o 00:16:40.551 CC lib/ftl/ftl_p2l.o 00:16:40.551 CC lib/ftl/ftl_p2l_log.o 00:16:40.810 CC lib/ftl/mngt/ftl_mngt.o 00:16:40.810 CC lib/iscsi/conn.o 00:16:40.810 CC 
lib/iscsi/init_grp.o 00:16:40.810 CC lib/iscsi/iscsi.o 00:16:40.810 CC lib/vhost/vhost.o 00:16:41.069 CC lib/vhost/vhost_rpc.o 00:16:41.069 CC lib/vhost/vhost_scsi.o 00:16:41.327 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:16:41.327 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:16:41.327 CC lib/iscsi/param.o 00:16:41.327 CC lib/vhost/vhost_blk.o 00:16:41.585 CC lib/ftl/mngt/ftl_mngt_startup.o 00:16:41.585 CC lib/ftl/mngt/ftl_mngt_md.o 00:16:41.585 CC lib/vhost/rte_vhost_user.o 00:16:41.585 CC lib/iscsi/portal_grp.o 00:16:41.845 CC lib/iscsi/tgt_node.o 00:16:41.845 CC lib/iscsi/iscsi_subsystem.o 00:16:42.104 CC lib/ftl/mngt/ftl_mngt_misc.o 00:16:42.104 CC lib/iscsi/iscsi_rpc.o 00:16:42.104 CC lib/iscsi/task.o 00:16:42.362 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:16:42.362 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:16:42.362 CC lib/ftl/mngt/ftl_mngt_band.o 00:16:42.362 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:16:42.362 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:16:42.621 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:16:42.621 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:16:42.621 CC lib/ftl/utils/ftl_conf.o 00:16:42.621 CC lib/ftl/utils/ftl_md.o 00:16:42.621 CC lib/ftl/utils/ftl_mempool.o 00:16:42.621 CC lib/ftl/utils/ftl_bitmap.o 00:16:42.621 CC lib/ftl/utils/ftl_property.o 00:16:42.972 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:16:42.972 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:16:42.972 LIB libspdk_iscsi.a 00:16:42.972 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:16:42.972 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:16:42.972 SO libspdk_iscsi.so.8.0 00:16:42.972 LIB libspdk_vhost.a 00:16:42.972 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:16:42.972 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:16:42.972 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:16:42.972 SO libspdk_vhost.so.8.0 00:16:43.231 CC lib/ftl/upgrade/ftl_sb_v3.o 00:16:43.231 CC lib/ftl/upgrade/ftl_sb_v5.o 00:16:43.231 SYMLINK libspdk_iscsi.so 00:16:43.231 CC lib/ftl/nvc/ftl_nvc_dev.o 00:16:43.231 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:16:43.231 CC 
lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:16:43.231 SYMLINK libspdk_vhost.so 00:16:43.231 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:16:43.231 CC lib/ftl/base/ftl_base_dev.o 00:16:43.231 CC lib/ftl/base/ftl_base_bdev.o 00:16:43.231 CC lib/ftl/ftl_trace.o 00:16:43.231 LIB libspdk_nvmf.a 00:16:43.490 SO libspdk_nvmf.so.20.0 00:16:43.490 LIB libspdk_ftl.a 00:16:43.748 SYMLINK libspdk_nvmf.so 00:16:44.006 SO libspdk_ftl.so.9.0 00:16:44.265 SYMLINK libspdk_ftl.so 00:16:44.832 CC module/env_dpdk/env_dpdk_rpc.o 00:16:44.832 CC module/accel/error/accel_error.o 00:16:44.832 CC module/accel/dsa/accel_dsa.o 00:16:44.832 CC module/accel/ioat/accel_ioat.o 00:16:44.832 CC module/keyring/file/keyring.o 00:16:44.832 CC module/fsdev/aio/fsdev_aio.o 00:16:44.832 CC module/scheduler/dynamic/scheduler_dynamic.o 00:16:44.832 CC module/sock/posix/posix.o 00:16:44.832 CC module/blob/bdev/blob_bdev.o 00:16:44.832 CC module/accel/iaa/accel_iaa.o 00:16:44.832 LIB libspdk_env_dpdk_rpc.a 00:16:44.832 SO libspdk_env_dpdk_rpc.so.6.0 00:16:45.091 SYMLINK libspdk_env_dpdk_rpc.so 00:16:45.091 CC module/keyring/file/keyring_rpc.o 00:16:45.091 CC module/accel/ioat/accel_ioat_rpc.o 00:16:45.091 LIB libspdk_scheduler_dynamic.a 00:16:45.091 CC module/accel/iaa/accel_iaa_rpc.o 00:16:45.091 CC module/accel/error/accel_error_rpc.o 00:16:45.091 SO libspdk_scheduler_dynamic.so.4.0 00:16:45.091 LIB libspdk_keyring_file.a 00:16:45.091 CC module/accel/dsa/accel_dsa_rpc.o 00:16:45.091 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:16:45.091 SYMLINK libspdk_scheduler_dynamic.so 00:16:45.091 SO libspdk_keyring_file.so.2.0 00:16:45.091 LIB libspdk_accel_iaa.a 00:16:45.091 LIB libspdk_accel_ioat.a 00:16:45.349 LIB libspdk_blob_bdev.a 00:16:45.349 SO libspdk_accel_iaa.so.3.0 00:16:45.349 SO libspdk_accel_ioat.so.6.0 00:16:45.349 SYMLINK libspdk_keyring_file.so 00:16:45.349 SO libspdk_blob_bdev.so.11.0 00:16:45.349 LIB libspdk_accel_error.a 00:16:45.349 LIB libspdk_accel_dsa.a 00:16:45.349 SO 
libspdk_accel_error.so.2.0 00:16:45.349 SYMLINK libspdk_accel_ioat.so 00:16:45.349 SYMLINK libspdk_blob_bdev.so 00:16:45.349 CC module/fsdev/aio/fsdev_aio_rpc.o 00:16:45.349 SYMLINK libspdk_accel_iaa.so 00:16:45.349 CC module/fsdev/aio/linux_aio_mgr.o 00:16:45.349 LIB libspdk_scheduler_dpdk_governor.a 00:16:45.349 SO libspdk_accel_dsa.so.5.0 00:16:45.349 SYMLINK libspdk_accel_error.so 00:16:45.349 SO libspdk_scheduler_dpdk_governor.so.4.0 00:16:45.349 CC module/scheduler/gscheduler/gscheduler.o 00:16:45.349 SYMLINK libspdk_accel_dsa.so 00:16:45.607 SYMLINK libspdk_scheduler_dpdk_governor.so 00:16:45.607 CC module/keyring/linux/keyring.o 00:16:45.607 CC module/keyring/linux/keyring_rpc.o 00:16:45.607 LIB libspdk_scheduler_gscheduler.a 00:16:45.607 SO libspdk_scheduler_gscheduler.so.4.0 00:16:45.607 CC module/bdev/error/vbdev_error.o 00:16:45.607 CC module/bdev/gpt/gpt.o 00:16:45.607 CC module/bdev/delay/vbdev_delay.o 00:16:45.607 SYMLINK libspdk_scheduler_gscheduler.so 00:16:45.607 CC module/bdev/delay/vbdev_delay_rpc.o 00:16:45.607 LIB libspdk_keyring_linux.a 00:16:45.867 CC module/blobfs/bdev/blobfs_bdev.o 00:16:45.867 SO libspdk_keyring_linux.so.1.0 00:16:45.867 LIB libspdk_sock_posix.a 00:16:45.867 LIB libspdk_fsdev_aio.a 00:16:45.867 CC module/bdev/lvol/vbdev_lvol.o 00:16:45.867 SO libspdk_sock_posix.so.6.0 00:16:45.867 SO libspdk_fsdev_aio.so.1.0 00:16:45.867 SYMLINK libspdk_keyring_linux.so 00:16:45.867 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:16:45.867 CC module/bdev/malloc/bdev_malloc.o 00:16:45.867 SYMLINK libspdk_sock_posix.so 00:16:45.867 CC module/bdev/malloc/bdev_malloc_rpc.o 00:16:45.867 CC module/bdev/gpt/vbdev_gpt.o 00:16:46.126 SYMLINK libspdk_fsdev_aio.so 00:16:46.126 CC module/bdev/error/vbdev_error_rpc.o 00:16:46.126 LIB libspdk_blobfs_bdev.a 00:16:46.126 SO libspdk_blobfs_bdev.so.6.0 00:16:46.126 SYMLINK libspdk_blobfs_bdev.so 00:16:46.126 CC module/bdev/null/bdev_null.o 00:16:46.384 CC module/bdev/nvme/bdev_nvme.o 00:16:46.384 CC 
module/bdev/passthru/vbdev_passthru.o 00:16:46.384 LIB libspdk_bdev_gpt.a 00:16:46.384 LIB libspdk_bdev_error.a 00:16:46.384 SO libspdk_bdev_gpt.so.6.0 00:16:46.384 LIB libspdk_bdev_malloc.a 00:16:46.384 SO libspdk_bdev_error.so.6.0 00:16:46.384 LIB libspdk_bdev_delay.a 00:16:46.384 SO libspdk_bdev_malloc.so.6.0 00:16:46.384 SO libspdk_bdev_delay.so.6.0 00:16:46.384 SYMLINK libspdk_bdev_gpt.so 00:16:46.384 SYMLINK libspdk_bdev_error.so 00:16:46.384 CC module/bdev/raid/bdev_raid.o 00:16:46.384 CC module/bdev/null/bdev_null_rpc.o 00:16:46.384 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:16:46.642 CC module/bdev/split/vbdev_split.o 00:16:46.642 SYMLINK libspdk_bdev_malloc.so 00:16:46.642 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:16:46.642 SYMLINK libspdk_bdev_delay.so 00:16:46.642 CC module/bdev/split/vbdev_split_rpc.o 00:16:46.642 CC module/bdev/zone_block/vbdev_zone_block.o 00:16:46.900 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:16:46.900 LIB libspdk_bdev_null.a 00:16:46.900 CC module/bdev/aio/bdev_aio.o 00:16:46.900 SO libspdk_bdev_null.so.6.0 00:16:46.900 LIB libspdk_bdev_passthru.a 00:16:46.900 SO libspdk_bdev_passthru.so.6.0 00:16:46.900 SYMLINK libspdk_bdev_null.so 00:16:46.900 LIB libspdk_bdev_split.a 00:16:46.900 LIB libspdk_bdev_lvol.a 00:16:46.900 SO libspdk_bdev_lvol.so.6.0 00:16:46.900 SO libspdk_bdev_split.so.6.0 00:16:47.159 CC module/bdev/raid/bdev_raid_rpc.o 00:16:47.159 SYMLINK libspdk_bdev_passthru.so 00:16:47.159 CC module/bdev/raid/bdev_raid_sb.o 00:16:47.159 SYMLINK libspdk_bdev_lvol.so 00:16:47.159 SYMLINK libspdk_bdev_split.so 00:16:47.159 CC module/bdev/raid/raid0.o 00:16:47.159 CC module/bdev/raid/raid1.o 00:16:47.159 LIB libspdk_bdev_zone_block.a 00:16:47.159 CC module/bdev/ftl/bdev_ftl.o 00:16:47.159 CC module/bdev/iscsi/bdev_iscsi.o 00:16:47.159 SO libspdk_bdev_zone_block.so.6.0 00:16:47.418 SYMLINK libspdk_bdev_zone_block.so 00:16:47.418 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:16:47.418 CC module/bdev/raid/concat.o 00:16:47.418 
CC module/bdev/aio/bdev_aio_rpc.o 00:16:47.418 CC module/bdev/raid/raid5f.o 00:16:47.418 CC module/bdev/ftl/bdev_ftl_rpc.o 00:16:47.418 CC module/bdev/virtio/bdev_virtio_scsi.o 00:16:47.677 CC module/bdev/nvme/bdev_nvme_rpc.o 00:16:47.677 LIB libspdk_bdev_aio.a 00:16:47.677 SO libspdk_bdev_aio.so.6.0 00:16:47.677 CC module/bdev/virtio/bdev_virtio_blk.o 00:16:47.677 SYMLINK libspdk_bdev_aio.so 00:16:47.677 CC module/bdev/nvme/nvme_rpc.o 00:16:47.677 CC module/bdev/nvme/bdev_mdns_client.o 00:16:47.992 LIB libspdk_bdev_ftl.a 00:16:47.992 SO libspdk_bdev_ftl.so.6.0 00:16:47.992 LIB libspdk_bdev_iscsi.a 00:16:47.992 CC module/bdev/nvme/vbdev_opal.o 00:16:47.992 CC module/bdev/nvme/vbdev_opal_rpc.o 00:16:47.992 CC module/bdev/virtio/bdev_virtio_rpc.o 00:16:47.992 SO libspdk_bdev_iscsi.so.6.0 00:16:47.992 SYMLINK libspdk_bdev_ftl.so 00:16:47.992 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:16:47.992 SYMLINK libspdk_bdev_iscsi.so 00:16:48.250 LIB libspdk_bdev_virtio.a 00:16:48.508 LIB libspdk_bdev_raid.a 00:16:48.508 SO libspdk_bdev_virtio.so.6.0 00:16:48.508 SO libspdk_bdev_raid.so.6.0 00:16:48.509 SYMLINK libspdk_bdev_virtio.so 00:16:48.509 SYMLINK libspdk_bdev_raid.so 00:16:50.409 LIB libspdk_bdev_nvme.a 00:16:50.409 SO libspdk_bdev_nvme.so.7.1 00:16:50.409 SYMLINK libspdk_bdev_nvme.so 00:16:50.975 CC module/event/subsystems/keyring/keyring.o 00:16:50.975 CC module/event/subsystems/iobuf/iobuf.o 00:16:50.975 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:16:50.975 CC module/event/subsystems/scheduler/scheduler.o 00:16:50.975 CC module/event/subsystems/vmd/vmd_rpc.o 00:16:50.975 CC module/event/subsystems/vmd/vmd.o 00:16:50.975 CC module/event/subsystems/fsdev/fsdev.o 00:16:50.975 CC module/event/subsystems/sock/sock.o 00:16:50.975 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:16:51.233 LIB libspdk_event_keyring.a 00:16:51.233 SO libspdk_event_keyring.so.1.0 00:16:51.233 LIB libspdk_event_vhost_blk.a 00:16:51.233 LIB libspdk_event_fsdev.a 00:16:51.233 LIB 
libspdk_event_vmd.a 00:16:51.233 LIB libspdk_event_iobuf.a 00:16:51.233 LIB libspdk_event_scheduler.a 00:16:51.233 SYMLINK libspdk_event_keyring.so 00:16:51.233 SO libspdk_event_vhost_blk.so.3.0 00:16:51.233 SO libspdk_event_fsdev.so.1.0 00:16:51.233 LIB libspdk_event_sock.a 00:16:51.233 SO libspdk_event_scheduler.so.4.0 00:16:51.233 SO libspdk_event_iobuf.so.3.0 00:16:51.233 SO libspdk_event_vmd.so.6.0 00:16:51.233 SO libspdk_event_sock.so.5.0 00:16:51.233 SYMLINK libspdk_event_fsdev.so 00:16:51.233 SYMLINK libspdk_event_vhost_blk.so 00:16:51.233 SYMLINK libspdk_event_iobuf.so 00:16:51.491 SYMLINK libspdk_event_scheduler.so 00:16:51.491 SYMLINK libspdk_event_vmd.so 00:16:51.491 SYMLINK libspdk_event_sock.so 00:16:51.491 CC module/event/subsystems/accel/accel.o 00:16:51.748 LIB libspdk_event_accel.a 00:16:51.748 SO libspdk_event_accel.so.6.0 00:16:52.007 SYMLINK libspdk_event_accel.so 00:16:52.265 CC module/event/subsystems/bdev/bdev.o 00:16:52.523 LIB libspdk_event_bdev.a 00:16:52.523 SO libspdk_event_bdev.so.6.0 00:16:52.523 SYMLINK libspdk_event_bdev.so 00:16:52.781 CC module/event/subsystems/ublk/ublk.o 00:16:52.781 CC module/event/subsystems/scsi/scsi.o 00:16:52.781 CC module/event/subsystems/nbd/nbd.o 00:16:52.781 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:16:52.781 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:16:53.039 LIB libspdk_event_ublk.a 00:16:53.039 LIB libspdk_event_scsi.a 00:16:53.039 SO libspdk_event_ublk.so.3.0 00:16:53.039 SO libspdk_event_scsi.so.6.0 00:16:53.039 LIB libspdk_event_nbd.a 00:16:53.039 SYMLINK libspdk_event_ublk.so 00:16:53.039 SO libspdk_event_nbd.so.6.0 00:16:53.039 SYMLINK libspdk_event_scsi.so 00:16:53.039 SYMLINK libspdk_event_nbd.so 00:16:53.039 LIB libspdk_event_nvmf.a 00:16:53.296 SO libspdk_event_nvmf.so.6.0 00:16:53.296 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:16:53.296 CC module/event/subsystems/iscsi/iscsi.o 00:16:53.296 SYMLINK libspdk_event_nvmf.so 00:16:53.619 LIB libspdk_event_vhost_scsi.a 
00:16:53.619 SO libspdk_event_vhost_scsi.so.3.0 00:16:53.619 LIB libspdk_event_iscsi.a 00:16:53.619 SO libspdk_event_iscsi.so.6.0 00:16:53.619 SYMLINK libspdk_event_vhost_scsi.so 00:16:53.619 SYMLINK libspdk_event_iscsi.so 00:16:53.619 SO libspdk.so.6.0 00:16:53.619 SYMLINK libspdk.so 00:16:53.878 CC test/rpc_client/rpc_client_test.o 00:16:53.878 CXX app/trace/trace.o 00:16:53.878 TEST_HEADER include/spdk/accel.h 00:16:53.878 TEST_HEADER include/spdk/accel_module.h 00:16:53.878 TEST_HEADER include/spdk/assert.h 00:16:53.878 TEST_HEADER include/spdk/barrier.h 00:16:53.878 TEST_HEADER include/spdk/base64.h 00:16:53.878 TEST_HEADER include/spdk/bdev.h 00:16:53.878 TEST_HEADER include/spdk/bdev_module.h 00:16:53.878 CC examples/interrupt_tgt/interrupt_tgt.o 00:16:53.878 TEST_HEADER include/spdk/bdev_zone.h 00:16:53.878 TEST_HEADER include/spdk/bit_array.h 00:16:53.878 TEST_HEADER include/spdk/bit_pool.h 00:16:53.878 TEST_HEADER include/spdk/blob_bdev.h 00:16:53.878 TEST_HEADER include/spdk/blobfs_bdev.h 00:16:53.878 TEST_HEADER include/spdk/blobfs.h 00:16:53.878 TEST_HEADER include/spdk/blob.h 00:16:53.878 TEST_HEADER include/spdk/conf.h 00:16:53.878 TEST_HEADER include/spdk/config.h 00:16:53.878 TEST_HEADER include/spdk/cpuset.h 00:16:53.878 TEST_HEADER include/spdk/crc16.h 00:16:53.878 TEST_HEADER include/spdk/crc32.h 00:16:53.878 TEST_HEADER include/spdk/crc64.h 00:16:53.878 TEST_HEADER include/spdk/dif.h 00:16:53.878 TEST_HEADER include/spdk/dma.h 00:16:53.878 TEST_HEADER include/spdk/endian.h 00:16:53.878 TEST_HEADER include/spdk/env_dpdk.h 00:16:53.878 TEST_HEADER include/spdk/env.h 00:16:53.878 TEST_HEADER include/spdk/event.h 00:16:53.878 TEST_HEADER include/spdk/fd_group.h 00:16:53.878 TEST_HEADER include/spdk/fd.h 00:16:53.878 TEST_HEADER include/spdk/file.h 00:16:54.137 TEST_HEADER include/spdk/fsdev.h 00:16:54.137 TEST_HEADER include/spdk/fsdev_module.h 00:16:54.137 TEST_HEADER include/spdk/ftl.h 00:16:54.137 CC examples/util/zipf/zipf.o 00:16:54.137 
TEST_HEADER include/spdk/fuse_dispatcher.h 00:16:54.137 TEST_HEADER include/spdk/gpt_spec.h 00:16:54.137 CC examples/ioat/perf/perf.o 00:16:54.137 TEST_HEADER include/spdk/hexlify.h 00:16:54.137 TEST_HEADER include/spdk/histogram_data.h 00:16:54.137 TEST_HEADER include/spdk/idxd.h 00:16:54.137 TEST_HEADER include/spdk/idxd_spec.h 00:16:54.137 TEST_HEADER include/spdk/init.h 00:16:54.137 TEST_HEADER include/spdk/ioat.h 00:16:54.137 TEST_HEADER include/spdk/ioat_spec.h 00:16:54.137 TEST_HEADER include/spdk/iscsi_spec.h 00:16:54.137 CC test/thread/poller_perf/poller_perf.o 00:16:54.137 TEST_HEADER include/spdk/json.h 00:16:54.137 TEST_HEADER include/spdk/jsonrpc.h 00:16:54.137 TEST_HEADER include/spdk/keyring.h 00:16:54.137 TEST_HEADER include/spdk/keyring_module.h 00:16:54.137 TEST_HEADER include/spdk/likely.h 00:16:54.137 TEST_HEADER include/spdk/log.h 00:16:54.137 TEST_HEADER include/spdk/lvol.h 00:16:54.137 TEST_HEADER include/spdk/md5.h 00:16:54.137 CC test/dma/test_dma/test_dma.o 00:16:54.137 TEST_HEADER include/spdk/memory.h 00:16:54.137 TEST_HEADER include/spdk/mmio.h 00:16:54.137 TEST_HEADER include/spdk/nbd.h 00:16:54.137 TEST_HEADER include/spdk/net.h 00:16:54.137 TEST_HEADER include/spdk/notify.h 00:16:54.137 TEST_HEADER include/spdk/nvme.h 00:16:54.137 TEST_HEADER include/spdk/nvme_intel.h 00:16:54.137 TEST_HEADER include/spdk/nvme_ocssd.h 00:16:54.137 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:16:54.137 TEST_HEADER include/spdk/nvme_spec.h 00:16:54.137 TEST_HEADER include/spdk/nvme_zns.h 00:16:54.137 CC test/app/bdev_svc/bdev_svc.o 00:16:54.137 TEST_HEADER include/spdk/nvmf_cmd.h 00:16:54.137 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:16:54.137 TEST_HEADER include/spdk/nvmf.h 00:16:54.137 TEST_HEADER include/spdk/nvmf_spec.h 00:16:54.137 TEST_HEADER include/spdk/nvmf_transport.h 00:16:54.137 TEST_HEADER include/spdk/opal.h 00:16:54.137 TEST_HEADER include/spdk/opal_spec.h 00:16:54.137 TEST_HEADER include/spdk/pci_ids.h 00:16:54.137 TEST_HEADER 
include/spdk/pipe.h 00:16:54.137 TEST_HEADER include/spdk/queue.h 00:16:54.137 TEST_HEADER include/spdk/reduce.h 00:16:54.137 TEST_HEADER include/spdk/rpc.h 00:16:54.137 TEST_HEADER include/spdk/scheduler.h 00:16:54.137 TEST_HEADER include/spdk/scsi.h 00:16:54.137 TEST_HEADER include/spdk/scsi_spec.h 00:16:54.137 TEST_HEADER include/spdk/sock.h 00:16:54.137 TEST_HEADER include/spdk/stdinc.h 00:16:54.137 TEST_HEADER include/spdk/string.h 00:16:54.137 TEST_HEADER include/spdk/thread.h 00:16:54.137 TEST_HEADER include/spdk/trace.h 00:16:54.137 TEST_HEADER include/spdk/trace_parser.h 00:16:54.137 TEST_HEADER include/spdk/tree.h 00:16:54.137 TEST_HEADER include/spdk/ublk.h 00:16:54.137 TEST_HEADER include/spdk/util.h 00:16:54.137 CC test/env/mem_callbacks/mem_callbacks.o 00:16:54.137 LINK rpc_client_test 00:16:54.137 TEST_HEADER include/spdk/uuid.h 00:16:54.137 TEST_HEADER include/spdk/version.h 00:16:54.137 TEST_HEADER include/spdk/vfio_user_pci.h 00:16:54.137 TEST_HEADER include/spdk/vfio_user_spec.h 00:16:54.137 TEST_HEADER include/spdk/vhost.h 00:16:54.137 TEST_HEADER include/spdk/vmd.h 00:16:54.137 TEST_HEADER include/spdk/xor.h 00:16:54.137 TEST_HEADER include/spdk/zipf.h 00:16:54.137 CXX test/cpp_headers/accel.o 00:16:54.137 LINK interrupt_tgt 00:16:54.137 LINK poller_perf 00:16:54.137 LINK zipf 00:16:54.396 LINK bdev_svc 00:16:54.396 CXX test/cpp_headers/accel_module.o 00:16:54.396 LINK ioat_perf 00:16:54.396 LINK spdk_trace 00:16:54.396 CC test/env/vtophys/vtophys.o 00:16:54.396 CC test/env/memory/memory_ut.o 00:16:54.396 CXX test/cpp_headers/assert.o 00:16:54.396 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:16:54.655 CC test/event/event_perf/event_perf.o 00:16:54.655 LINK test_dma 00:16:54.655 CC examples/ioat/verify/verify.o 00:16:54.655 LINK vtophys 00:16:54.655 CC app/trace_record/trace_record.o 00:16:54.655 CXX test/cpp_headers/barrier.o 00:16:54.655 LINK env_dpdk_post_init 00:16:54.913 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:16:54.913 LINK 
mem_callbacks 00:16:54.913 LINK event_perf 00:16:54.913 CXX test/cpp_headers/base64.o 00:16:54.913 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:16:54.913 LINK verify 00:16:54.913 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:16:55.170 LINK spdk_trace_record 00:16:55.170 CXX test/cpp_headers/bdev.o 00:16:55.170 CC test/event/reactor/reactor.o 00:16:55.170 CC test/accel/dif/dif.o 00:16:55.170 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:16:55.170 CC examples/thread/thread/thread_ex.o 00:16:55.170 CC test/app/histogram_perf/histogram_perf.o 00:16:55.170 LINK reactor 00:16:55.428 CXX test/cpp_headers/bdev_module.o 00:16:55.428 CC app/nvmf_tgt/nvmf_main.o 00:16:55.428 LINK nvme_fuzz 00:16:55.428 LINK histogram_perf 00:16:55.428 CC test/event/reactor_perf/reactor_perf.o 00:16:55.428 CXX test/cpp_headers/bdev_zone.o 00:16:55.688 LINK nvmf_tgt 00:16:55.688 LINK thread 00:16:55.688 CXX test/cpp_headers/bit_array.o 00:16:55.688 CC test/app/jsoncat/jsoncat.o 00:16:55.688 LINK reactor_perf 00:16:55.688 LINK vhost_fuzz 00:16:55.688 CXX test/cpp_headers/bit_pool.o 00:16:55.953 LINK jsoncat 00:16:55.953 CXX test/cpp_headers/blob_bdev.o 00:16:55.953 CC examples/sock/hello_world/hello_sock.o 00:16:55.953 CC app/iscsi_tgt/iscsi_tgt.o 00:16:55.953 CC test/app/stub/stub.o 00:16:55.953 CC test/event/app_repeat/app_repeat.o 00:16:55.953 LINK memory_ut 00:16:55.953 CXX test/cpp_headers/blobfs_bdev.o 00:16:56.212 CC app/spdk_tgt/spdk_tgt.o 00:16:56.212 LINK app_repeat 00:16:56.212 LINK dif 00:16:56.212 CC examples/vmd/lsvmd/lsvmd.o 00:16:56.212 LINK iscsi_tgt 00:16:56.212 LINK stub 00:16:56.212 CXX test/cpp_headers/blobfs.o 00:16:56.212 CC test/env/pci/pci_ut.o 00:16:56.212 LINK lsvmd 00:16:56.470 LINK hello_sock 00:16:56.470 LINK spdk_tgt 00:16:56.470 CC examples/vmd/led/led.o 00:16:56.470 CXX test/cpp_headers/blob.o 00:16:56.470 CC test/event/scheduler/scheduler.o 00:16:56.470 CC app/spdk_lspci/spdk_lspci.o 00:16:56.470 CXX test/cpp_headers/conf.o 00:16:56.728 CC test/blobfs/mkfs/mkfs.o 
00:16:56.728 LINK led 00:16:56.728 LINK spdk_lspci 00:16:56.728 CC examples/idxd/perf/perf.o 00:16:56.728 CXX test/cpp_headers/config.o 00:16:56.728 LINK scheduler 00:16:56.728 CC app/spdk_nvme_perf/perf.o 00:16:56.728 CXX test/cpp_headers/cpuset.o 00:16:56.728 LINK pci_ut 00:16:56.987 LINK mkfs 00:16:56.987 CC app/spdk_nvme_identify/identify.o 00:16:56.987 CXX test/cpp_headers/crc16.o 00:16:56.987 CC test/lvol/esnap/esnap.o 00:16:56.987 CC app/spdk_nvme_discover/discovery_aer.o 00:16:56.987 CC examples/fsdev/hello_world/hello_fsdev.o 00:16:57.246 LINK idxd_perf 00:16:57.246 CXX test/cpp_headers/crc32.o 00:16:57.246 CXX test/cpp_headers/crc64.o 00:16:57.246 LINK iscsi_fuzz 00:16:57.246 CC test/nvme/aer/aer.o 00:16:57.504 LINK spdk_nvme_discover 00:16:57.504 CXX test/cpp_headers/dif.o 00:16:57.504 CC test/nvme/reset/reset.o 00:16:57.504 LINK hello_fsdev 00:16:57.504 CC test/nvme/sgl/sgl.o 00:16:57.504 CXX test/cpp_headers/dma.o 00:16:57.762 LINK aer 00:16:57.762 CC test/nvme/e2edp/nvme_dp.o 00:16:57.762 CXX test/cpp_headers/endian.o 00:16:57.762 CC test/nvme/overhead/overhead.o 00:16:57.762 LINK reset 00:16:57.762 LINK sgl 00:16:57.762 CC examples/accel/perf/accel_perf.o 00:16:57.762 LINK spdk_nvme_perf 00:16:58.019 CXX test/cpp_headers/env_dpdk.o 00:16:58.019 CC test/nvme/err_injection/err_injection.o 00:16:58.019 CC test/nvme/startup/startup.o 00:16:58.019 LINK nvme_dp 00:16:58.019 CXX test/cpp_headers/env.o 00:16:58.019 LINK spdk_nvme_identify 00:16:58.019 LINK overhead 00:16:58.277 CC test/nvme/reserve/reserve.o 00:16:58.277 CC test/nvme/simple_copy/simple_copy.o 00:16:58.277 LINK err_injection 00:16:58.277 LINK startup 00:16:58.277 CXX test/cpp_headers/event.o 00:16:58.277 CC test/nvme/connect_stress/connect_stress.o 00:16:58.542 CC app/spdk_top/spdk_top.o 00:16:58.542 LINK reserve 00:16:58.542 CXX test/cpp_headers/fd_group.o 00:16:58.542 LINK simple_copy 00:16:58.542 LINK connect_stress 00:16:58.542 CC examples/blob/hello_world/hello_blob.o 00:16:58.542 LINK 
accel_perf 00:16:58.542 CC examples/nvme/hello_world/hello_world.o 00:16:58.799 CXX test/cpp_headers/fd.o 00:16:58.799 CC test/bdev/bdevio/bdevio.o 00:16:58.799 CC test/nvme/boot_partition/boot_partition.o 00:16:58.799 CC test/nvme/compliance/nvme_compliance.o 00:16:58.799 CXX test/cpp_headers/file.o 00:16:58.799 CC test/nvme/fused_ordering/fused_ordering.o 00:16:58.799 LINK hello_world 00:16:58.799 LINK hello_blob 00:16:59.058 CC test/nvme/doorbell_aers/doorbell_aers.o 00:16:59.058 LINK boot_partition 00:16:59.058 CXX test/cpp_headers/fsdev.o 00:16:59.058 LINK fused_ordering 00:16:59.058 CC examples/nvme/reconnect/reconnect.o 00:16:59.058 LINK doorbell_aers 00:16:59.331 LINK bdevio 00:16:59.331 LINK nvme_compliance 00:16:59.331 CXX test/cpp_headers/fsdev_module.o 00:16:59.331 CC examples/blob/cli/blobcli.o 00:16:59.331 CC examples/bdev/hello_world/hello_bdev.o 00:16:59.331 CC examples/nvme/nvme_manage/nvme_manage.o 00:16:59.331 CC examples/nvme/arbitration/arbitration.o 00:16:59.589 CXX test/cpp_headers/ftl.o 00:16:59.589 CC test/nvme/fdp/fdp.o 00:16:59.589 LINK spdk_top 00:16:59.589 CC examples/bdev/bdevperf/bdevperf.o 00:16:59.589 LINK reconnect 00:16:59.589 LINK hello_bdev 00:16:59.589 CXX test/cpp_headers/fuse_dispatcher.o 00:16:59.848 LINK arbitration 00:16:59.848 CXX test/cpp_headers/gpt_spec.o 00:16:59.848 LINK blobcli 00:16:59.848 CC app/spdk_dd/spdk_dd.o 00:16:59.848 CC app/vhost/vhost.o 00:16:59.848 LINK fdp 00:17:00.107 CC app/fio/nvme/fio_plugin.o 00:17:00.107 LINK nvme_manage 00:17:00.107 CXX test/cpp_headers/hexlify.o 00:17:00.107 CC examples/nvme/hotplug/hotplug.o 00:17:00.107 LINK vhost 00:17:00.107 CC examples/nvme/cmb_copy/cmb_copy.o 00:17:00.107 CC test/nvme/cuse/cuse.o 00:17:00.365 CXX test/cpp_headers/histogram_data.o 00:17:00.365 CC examples/nvme/abort/abort.o 00:17:00.365 CXX test/cpp_headers/idxd.o 00:17:00.365 LINK spdk_dd 00:17:00.365 LINK cmb_copy 00:17:00.623 CXX test/cpp_headers/idxd_spec.o 00:17:00.623 CXX test/cpp_headers/init.o 
00:17:00.623 LINK hotplug 00:17:00.623 CC app/fio/bdev/fio_plugin.o 00:17:00.623 LINK bdevperf 00:17:00.623 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:17:00.623 CXX test/cpp_headers/ioat.o 00:17:00.881 CXX test/cpp_headers/ioat_spec.o 00:17:00.881 CXX test/cpp_headers/iscsi_spec.o 00:17:00.881 LINK abort 00:17:00.881 LINK spdk_nvme 00:17:00.881 CXX test/cpp_headers/json.o 00:17:00.881 LINK pmr_persistence 00:17:00.881 CXX test/cpp_headers/jsonrpc.o 00:17:00.881 CXX test/cpp_headers/keyring.o 00:17:00.881 CXX test/cpp_headers/keyring_module.o 00:17:00.881 CXX test/cpp_headers/likely.o 00:17:00.881 CXX test/cpp_headers/log.o 00:17:01.140 CXX test/cpp_headers/lvol.o 00:17:01.140 CXX test/cpp_headers/md5.o 00:17:01.140 CXX test/cpp_headers/memory.o 00:17:01.140 CXX test/cpp_headers/mmio.o 00:17:01.140 CXX test/cpp_headers/nbd.o 00:17:01.140 CXX test/cpp_headers/net.o 00:17:01.140 CXX test/cpp_headers/notify.o 00:17:01.140 CXX test/cpp_headers/nvme.o 00:17:01.399 CXX test/cpp_headers/nvme_intel.o 00:17:01.399 LINK spdk_bdev 00:17:01.399 CXX test/cpp_headers/nvme_ocssd.o 00:17:01.399 CC examples/nvmf/nvmf/nvmf.o 00:17:01.399 CXX test/cpp_headers/nvme_ocssd_spec.o 00:17:01.399 CXX test/cpp_headers/nvme_spec.o 00:17:01.399 CXX test/cpp_headers/nvme_zns.o 00:17:01.399 CXX test/cpp_headers/nvmf_cmd.o 00:17:01.399 CXX test/cpp_headers/nvmf_fc_spec.o 00:17:01.657 CXX test/cpp_headers/nvmf.o 00:17:01.657 CXX test/cpp_headers/nvmf_spec.o 00:17:01.657 CXX test/cpp_headers/nvmf_transport.o 00:17:01.657 CXX test/cpp_headers/opal.o 00:17:01.657 CXX test/cpp_headers/opal_spec.o 00:17:01.657 LINK nvmf 00:17:01.916 CXX test/cpp_headers/pci_ids.o 00:17:01.916 CXX test/cpp_headers/pipe.o 00:17:01.916 CXX test/cpp_headers/queue.o 00:17:01.916 CXX test/cpp_headers/reduce.o 00:17:01.916 CXX test/cpp_headers/rpc.o 00:17:01.916 CXX test/cpp_headers/scheduler.o 00:17:01.916 CXX test/cpp_headers/scsi.o 00:17:01.916 CXX test/cpp_headers/scsi_spec.o 00:17:01.916 CXX 
test/cpp_headers/sock.o 00:17:01.916 CXX test/cpp_headers/stdinc.o 00:17:01.916 CXX test/cpp_headers/string.o 00:17:02.174 CXX test/cpp_headers/thread.o 00:17:02.174 CXX test/cpp_headers/trace.o 00:17:02.174 CXX test/cpp_headers/trace_parser.o 00:17:02.174 CXX test/cpp_headers/tree.o 00:17:02.174 CXX test/cpp_headers/ublk.o 00:17:02.174 CXX test/cpp_headers/util.o 00:17:02.174 CXX test/cpp_headers/uuid.o 00:17:02.174 CXX test/cpp_headers/version.o 00:17:02.174 LINK cuse 00:17:02.174 CXX test/cpp_headers/vfio_user_pci.o 00:17:02.174 CXX test/cpp_headers/vfio_user_spec.o 00:17:02.174 CXX test/cpp_headers/vhost.o 00:17:02.432 CXX test/cpp_headers/vmd.o 00:17:02.432 CXX test/cpp_headers/xor.o 00:17:02.432 CXX test/cpp_headers/zipf.o 00:17:04.967 LINK esnap 00:17:04.967 00:17:04.967 real 1m43.394s 00:17:04.967 user 9m40.605s 00:17:04.967 sys 1m52.680s 00:17:04.967 13:37:07 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:17:04.967 ************************************ 00:17:04.967 13:37:07 make -- common/autotest_common.sh@10 -- $ set +x 00:17:04.967 END TEST make 00:17:04.967 ************************************ 00:17:05.226 13:37:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:17:05.226 13:37:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:17:05.226 13:37:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:17:05.226 13:37:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:05.226 13:37:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:17:05.226 13:37:07 -- pm/common@44 -- $ pid=5416 00:17:05.226 13:37:07 -- pm/common@50 -- $ kill -TERM 5416 00:17:05.226 13:37:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:05.226 13:37:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:17:05.226 13:37:07 -- pm/common@44 -- $ pid=5417 00:17:05.226 13:37:07 -- pm/common@50 -- $ kill -TERM 5417 00:17:05.226 
13:37:07 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:17:05.226 13:37:07 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:17:05.226 13:37:07 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:05.226 13:37:07 -- common/autotest_common.sh@1693 -- # lcov --version 00:17:05.226 13:37:07 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:05.226 13:37:08 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:05.226 13:37:08 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.226 13:37:08 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.226 13:37:08 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.226 13:37:08 -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.226 13:37:08 -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.226 13:37:08 -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.226 13:37:08 -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.226 13:37:08 -- scripts/common.sh@338 -- # local 'op=<' 00:17:05.226 13:37:08 -- scripts/common.sh@340 -- # ver1_l=2 00:17:05.226 13:37:08 -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.226 13:37:08 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.226 13:37:08 -- scripts/common.sh@344 -- # case "$op" in 00:17:05.226 13:37:08 -- scripts/common.sh@345 -- # : 1 00:17:05.226 13:37:08 -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.226 13:37:08 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:05.226 13:37:08 -- scripts/common.sh@365 -- # decimal 1 00:17:05.226 13:37:08 -- scripts/common.sh@353 -- # local d=1 00:17:05.226 13:37:08 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.226 13:37:08 -- scripts/common.sh@355 -- # echo 1 00:17:05.226 13:37:08 -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.226 13:37:08 -- scripts/common.sh@366 -- # decimal 2 00:17:05.226 13:37:08 -- scripts/common.sh@353 -- # local d=2 00:17:05.226 13:37:08 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.226 13:37:08 -- scripts/common.sh@355 -- # echo 2 00:17:05.226 13:37:08 -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.226 13:37:08 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.226 13:37:08 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.226 13:37:08 -- scripts/common.sh@368 -- # return 0 00:17:05.226 13:37:08 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.226 13:37:08 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:05.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.226 --rc genhtml_branch_coverage=1 00:17:05.226 --rc genhtml_function_coverage=1 00:17:05.226 --rc genhtml_legend=1 00:17:05.226 --rc geninfo_all_blocks=1 00:17:05.226 --rc geninfo_unexecuted_blocks=1 00:17:05.226 00:17:05.226 ' 00:17:05.226 13:37:08 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:05.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.226 --rc genhtml_branch_coverage=1 00:17:05.226 --rc genhtml_function_coverage=1 00:17:05.226 --rc genhtml_legend=1 00:17:05.226 --rc geninfo_all_blocks=1 00:17:05.226 --rc geninfo_unexecuted_blocks=1 00:17:05.226 00:17:05.226 ' 00:17:05.226 13:37:08 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:05.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.226 --rc genhtml_branch_coverage=1 00:17:05.226 --rc 
genhtml_function_coverage=1 00:17:05.226 --rc genhtml_legend=1 00:17:05.226 --rc geninfo_all_blocks=1 00:17:05.226 --rc geninfo_unexecuted_blocks=1 00:17:05.226 00:17:05.226 ' 00:17:05.226 13:37:08 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:05.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.226 --rc genhtml_branch_coverage=1 00:17:05.226 --rc genhtml_function_coverage=1 00:17:05.226 --rc genhtml_legend=1 00:17:05.226 --rc geninfo_all_blocks=1 00:17:05.226 --rc geninfo_unexecuted_blocks=1 00:17:05.226 00:17:05.226 ' 00:17:05.226 13:37:08 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:05.226 13:37:08 -- nvmf/common.sh@7 -- # uname -s 00:17:05.226 13:37:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.226 13:37:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.226 13:37:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.226 13:37:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.226 13:37:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.226 13:37:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.226 13:37:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.226 13:37:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.226 13:37:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.226 13:37:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.226 13:37:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e37304f3-121e-4ded-b956-b778f5717116 00:17:05.226 13:37:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=e37304f3-121e-4ded-b956-b778f5717116 00:17:05.226 13:37:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.226 13:37:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.226 13:37:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:05.226 13:37:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:17:05.226 13:37:08 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:05.226 13:37:08 -- scripts/common.sh@15 -- # shopt -s extglob 00:17:05.226 13:37:08 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.226 13:37:08 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.226 13:37:08 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.226 13:37:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.226 13:37:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.226 13:37:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.226 13:37:08 -- paths/export.sh@5 -- # export PATH 00:17:05.226 13:37:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.226 13:37:08 -- nvmf/common.sh@51 -- # : 0 00:17:05.226 13:37:08 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:05.226 13:37:08 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:05.226 13:37:08 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:17:05.226 13:37:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.226 13:37:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.226 13:37:08 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:05.226 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:05.226 13:37:08 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:05.226 13:37:08 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:05.226 13:37:08 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:05.226 13:37:08 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:17:05.226 13:37:08 -- spdk/autotest.sh@32 -- # uname -s 00:17:05.226 13:37:08 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:17:05.227 13:37:08 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:17:05.227 13:37:08 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:17:05.227 13:37:08 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:17:05.227 13:37:08 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:17:05.227 13:37:08 -- spdk/autotest.sh@44 -- # modprobe nbd 00:17:05.485 13:37:08 -- spdk/autotest.sh@46 -- # type -P udevadm 00:17:05.485 13:37:08 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:17:05.485 13:37:08 -- spdk/autotest.sh@48 -- # udevadm_pid=54547 00:17:05.485 13:37:08 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:17:05.485 13:37:08 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:17:05.485 13:37:08 -- pm/common@17 -- # local monitor 00:17:05.485 13:37:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:17:05.485 13:37:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:17:05.485 13:37:08 -- pm/common@25 -- # sleep 1 00:17:05.485 13:37:08 -- pm/common@21 -- # date +%s 00:17:05.485 13:37:08 -- 
pm/common@21 -- # date +%s 00:17:05.485 13:37:08 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732109828 00:17:05.485 13:37:08 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732109828 00:17:05.485 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732109828_collect-vmstat.pm.log 00:17:05.485 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732109828_collect-cpu-load.pm.log 00:17:06.421 13:37:09 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:17:06.421 13:37:09 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:17:06.421 13:37:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.421 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:17:06.421 13:37:09 -- spdk/autotest.sh@59 -- # create_test_list 00:17:06.421 13:37:09 -- common/autotest_common.sh@752 -- # xtrace_disable 00:17:06.421 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:17:06.421 13:37:09 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:17:06.421 13:37:09 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:17:06.421 13:37:09 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:17:06.421 13:37:09 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:17:06.421 13:37:09 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:17:06.421 13:37:09 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:17:06.421 13:37:09 -- common/autotest_common.sh@1457 -- # uname 00:17:06.421 13:37:09 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:17:06.421 13:37:09 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:17:06.421 13:37:09 -- common/autotest_common.sh@1477 -- 
# uname 00:17:06.421 13:37:09 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:17:06.421 13:37:09 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:17:06.421 13:37:09 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:17:06.421 lcov: LCOV version 1.15 00:17:06.421 13:37:09 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:17:24.539 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:17:24.539 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:17:42.711 13:37:43 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:17:42.711 13:37:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:42.711 13:37:43 -- common/autotest_common.sh@10 -- # set +x 00:17:42.711 13:37:43 -- spdk/autotest.sh@78 -- # rm -f 00:17:42.711 13:37:43 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:42.711 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:42.711 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:17:42.711 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:17:42.711 13:37:44 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:17:42.711 13:37:44 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:17:42.711 13:37:44 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:17:42.711 13:37:44 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:17:42.711 
13:37:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:42.711 13:37:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:17:42.711 13:37:44 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:42.711 13:37:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:42.711 13:37:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:42.711 13:37:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:42.711 13:37:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:17:42.711 13:37:44 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:17:42.711 13:37:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:42.711 13:37:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:42.712 13:37:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:42.712 13:37:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:17:42.712 13:37:44 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:17:42.712 13:37:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:17:42.712 13:37:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:42.712 13:37:44 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:42.712 13:37:44 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:17:42.712 13:37:44 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:17:42.712 13:37:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:17:42.712 13:37:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:42.712 13:37:44 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:17:42.712 13:37:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:17:42.712 13:37:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:17:42.712 13:37:44 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:17:42.712 13:37:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:17:42.712 13:37:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:17:42.712 No valid GPT data, bailing 00:17:42.712 13:37:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:42.712 13:37:44 -- scripts/common.sh@394 -- # pt= 00:17:42.712 13:37:44 -- scripts/common.sh@395 -- # return 1 00:17:42.712 13:37:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:17:42.712 1+0 records in 00:17:42.712 1+0 records out 00:17:42.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00529866 s, 198 MB/s 00:17:42.712 13:37:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:17:42.712 13:37:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:17:42.712 13:37:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:17:42.712 13:37:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:17:42.712 13:37:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:17:42.712 No valid GPT data, bailing 00:17:42.712 13:37:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:42.712 13:37:44 -- scripts/common.sh@394 -- # pt= 00:17:42.712 13:37:44 -- scripts/common.sh@395 -- # return 1 00:17:42.712 13:37:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:17:42.712 1+0 records in 00:17:42.712 1+0 records out 00:17:42.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00400966 s, 262 MB/s 00:17:42.712 13:37:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:17:42.712 13:37:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:17:42.712 13:37:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:17:42.712 13:37:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:17:42.712 13:37:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:17:42.712 No valid GPT data, bailing 00:17:42.712 13:37:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:17:42.712 13:37:44 -- scripts/common.sh@394 -- # pt= 00:17:42.712 13:37:44 -- scripts/common.sh@395 -- # return 1 00:17:42.712 13:37:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:17:42.712 1+0 records in 00:17:42.712 1+0 records out 00:17:42.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00552073 s, 190 MB/s 00:17:42.712 13:37:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:17:42.712 13:37:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:17:42.712 13:37:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:17:42.712 13:37:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:17:42.712 13:37:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:17:42.712 No valid GPT data, bailing 00:17:42.712 13:37:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:17:42.712 13:37:44 -- scripts/common.sh@394 -- # pt= 00:17:42.712 13:37:44 -- scripts/common.sh@395 -- # return 1 00:17:42.712 13:37:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:17:42.712 1+0 records in 00:17:42.712 1+0 records out 00:17:42.712 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504976 s, 208 MB/s 00:17:42.712 13:37:44 -- spdk/autotest.sh@105 -- # sync 00:17:42.712 13:37:45 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:17:42.712 13:37:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:17:42.712 13:37:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:17:44.087 13:37:46 -- spdk/autotest.sh@111 -- # uname -s 00:17:44.087 13:37:46 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:17:44.087 13:37:46 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:17:44.087 13:37:46 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:17:45.023 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:45.023 Hugepages 00:17:45.023 node hugesize free / total 00:17:45.023 node0 1048576kB 0 / 0 00:17:45.023 node0 2048kB 0 / 0 00:17:45.023 00:17:45.023 Type BDF Vendor Device NUMA Driver Device Block devices 00:17:45.023 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:17:45.023 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:17:45.023 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:17:45.023 13:37:47 -- spdk/autotest.sh@117 -- # uname -s 00:17:45.023 13:37:47 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:17:45.023 13:37:47 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:17:45.023 13:37:47 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:45.959 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:45.959 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:45.959 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:45.959 13:37:48 -- common/autotest_common.sh@1517 -- # sleep 1 00:17:46.964 13:37:49 -- common/autotest_common.sh@1518 -- # bdfs=() 00:17:46.964 13:37:49 -- common/autotest_common.sh@1518 -- # local bdfs 00:17:46.964 13:37:49 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:17:46.964 13:37:49 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:17:46.964 13:37:49 -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:46.964 13:37:49 -- common/autotest_common.sh@1498 -- # local bdfs 00:17:46.964 13:37:49 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:46.964 13:37:49 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:46.964 13:37:49 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:46.964 13:37:49 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:17:46.964 13:37:49 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:17:46.964 13:37:49 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:47.223 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:47.223 Waiting for block devices as requested 00:17:47.484 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:47.484 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:47.484 13:37:50 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:17:47.484 13:37:50 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:17:47.484 13:37:50 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:17:47.484 13:37:50 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:17:47.484 13:37:50 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:17:47.484 13:37:50 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:17:47.484 13:37:50 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:17:47.484 13:37:50 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:17:47.484 13:37:50 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:17:47.484 13:37:50 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:17:47.484 13:37:50 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:17:47.484 13:37:50 -- common/autotest_common.sh@1531 -- # grep oacs 00:17:47.484 13:37:50 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:17:47.484 13:37:50 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:17:47.484 13:37:50 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:17:47.484 13:37:50 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:17:47.484 13:37:50 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:17:47.484 13:37:50 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:17:47.484 13:37:50 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:17:47.484 13:37:50 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:17:47.484 13:37:50 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:17:47.484 13:37:50 -- common/autotest_common.sh@1543 -- # continue 00:17:47.484 13:37:50 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:17:47.484 13:37:50 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:17:47.484 13:37:50 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:17:47.484 13:37:50 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:17:47.484 13:37:50 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:17:47.484 13:37:50 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:17:47.484 13:37:50 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:17:47.484 13:37:50 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:17:47.484 13:37:50 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:17:47.484 13:37:50 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:17:47.484 13:37:50 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:17:47.484 13:37:50 -- common/autotest_common.sh@1531 -- # grep oacs 00:17:47.484 13:37:50 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:17:47.743 13:37:50 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:17:47.743 13:37:50 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:17:47.743 13:37:50 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:17:47.743 13:37:50 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:17:47.743 13:37:50 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:17:47.743 13:37:50 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:17:47.743 13:37:50 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:17:47.743 13:37:50 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:17:47.744 13:37:50 -- common/autotest_common.sh@1543 -- # continue 00:17:47.744 13:37:50 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:17:47.744 13:37:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:47.744 13:37:50 -- common/autotest_common.sh@10 -- # set +x 00:17:47.744 13:37:50 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:17:47.744 13:37:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:47.744 13:37:50 -- common/autotest_common.sh@10 -- # set +x 00:17:47.744 13:37:50 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:48.311 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:48.311 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:48.311 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:48.569 13:37:51 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:17:48.569 13:37:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:48.569 13:37:51 -- common/autotest_common.sh@10 -- # set +x 00:17:48.569 13:37:51 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:17:48.569 13:37:51 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:17:48.569 13:37:51 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:17:48.569 13:37:51 -- common/autotest_common.sh@1563 -- # bdfs=() 00:17:48.569 13:37:51 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:17:48.569 13:37:51 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:17:48.569 13:37:51 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:17:48.569 13:37:51 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:17:48.569 
13:37:51 -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:48.569 13:37:51 -- common/autotest_common.sh@1498 -- # local bdfs 00:17:48.569 13:37:51 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:48.569 13:37:51 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:48.569 13:37:51 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:48.569 13:37:51 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:17:48.569 13:37:51 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:17:48.569 13:37:51 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:17:48.569 13:37:51 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:17:48.569 13:37:51 -- common/autotest_common.sh@1566 -- # device=0x0010 00:17:48.569 13:37:51 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:17:48.570 13:37:51 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:17:48.570 13:37:51 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:17:48.570 13:37:51 -- common/autotest_common.sh@1566 -- # device=0x0010 00:17:48.570 13:37:51 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:17:48.570 13:37:51 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:17:48.570 13:37:51 -- common/autotest_common.sh@1572 -- # return 0 00:17:48.570 13:37:51 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:17:48.570 13:37:51 -- common/autotest_common.sh@1580 -- # return 0 00:17:48.570 13:37:51 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:17:48.570 13:37:51 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:17:48.570 13:37:51 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:17:48.570 13:37:51 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:17:48.570 13:37:51 -- spdk/autotest.sh@149 -- # timing_enter lib 00:17:48.570 13:37:51 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:17:48.570 13:37:51 -- common/autotest_common.sh@10 -- # set +x 00:17:48.570 13:37:51 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:17:48.570 13:37:51 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:17:48.570 13:37:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:48.570 13:37:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:48.570 13:37:51 -- common/autotest_common.sh@10 -- # set +x 00:17:48.570 ************************************ 00:17:48.570 START TEST env 00:17:48.570 ************************************ 00:17:48.570 13:37:51 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:17:48.570 * Looking for test storage... 00:17:48.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:17:48.570 13:37:51 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:48.570 13:37:51 env -- common/autotest_common.sh@1693 -- # lcov --version 00:17:48.570 13:37:51 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:48.828 13:37:51 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:48.828 13:37:51 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:48.828 13:37:51 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:48.828 13:37:51 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:48.828 13:37:51 env -- scripts/common.sh@336 -- # IFS=.-: 00:17:48.828 13:37:51 env -- scripts/common.sh@336 -- # read -ra ver1 00:17:48.828 13:37:51 env -- scripts/common.sh@337 -- # IFS=.-: 00:17:48.828 13:37:51 env -- scripts/common.sh@337 -- # read -ra ver2 00:17:48.828 13:37:51 env -- scripts/common.sh@338 -- # local 'op=<' 00:17:48.828 13:37:51 env -- scripts/common.sh@340 -- # ver1_l=2 00:17:48.828 13:37:51 env -- scripts/common.sh@341 -- # ver2_l=1 00:17:48.828 13:37:51 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:48.828 13:37:51 env -- 
scripts/common.sh@344 -- # case "$op" in 00:17:48.828 13:37:51 env -- scripts/common.sh@345 -- # : 1 00:17:48.828 13:37:51 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:48.828 13:37:51 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:48.828 13:37:51 env -- scripts/common.sh@365 -- # decimal 1 00:17:48.828 13:37:51 env -- scripts/common.sh@353 -- # local d=1 00:17:48.828 13:37:51 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:48.828 13:37:51 env -- scripts/common.sh@355 -- # echo 1 00:17:48.828 13:37:51 env -- scripts/common.sh@365 -- # ver1[v]=1 00:17:48.828 13:37:51 env -- scripts/common.sh@366 -- # decimal 2 00:17:48.828 13:37:51 env -- scripts/common.sh@353 -- # local d=2 00:17:48.828 13:37:51 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:48.828 13:37:51 env -- scripts/common.sh@355 -- # echo 2 00:17:48.828 13:37:51 env -- scripts/common.sh@366 -- # ver2[v]=2 00:17:48.828 13:37:51 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:48.828 13:37:51 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:48.828 13:37:51 env -- scripts/common.sh@368 -- # return 0 00:17:48.828 13:37:51 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:48.828 13:37:51 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:48.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.828 --rc genhtml_branch_coverage=1 00:17:48.829 --rc genhtml_function_coverage=1 00:17:48.829 --rc genhtml_legend=1 00:17:48.829 --rc geninfo_all_blocks=1 00:17:48.829 --rc geninfo_unexecuted_blocks=1 00:17:48.829 00:17:48.829 ' 00:17:48.829 13:37:51 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:48.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.829 --rc genhtml_branch_coverage=1 00:17:48.829 --rc genhtml_function_coverage=1 00:17:48.829 --rc genhtml_legend=1 00:17:48.829 --rc 
geninfo_all_blocks=1 00:17:48.829 --rc geninfo_unexecuted_blocks=1 00:17:48.829 00:17:48.829 ' 00:17:48.829 13:37:51 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:48.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.829 --rc genhtml_branch_coverage=1 00:17:48.829 --rc genhtml_function_coverage=1 00:17:48.829 --rc genhtml_legend=1 00:17:48.829 --rc geninfo_all_blocks=1 00:17:48.829 --rc geninfo_unexecuted_blocks=1 00:17:48.829 00:17:48.829 ' 00:17:48.829 13:37:51 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:48.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.829 --rc genhtml_branch_coverage=1 00:17:48.829 --rc genhtml_function_coverage=1 00:17:48.829 --rc genhtml_legend=1 00:17:48.829 --rc geninfo_all_blocks=1 00:17:48.829 --rc geninfo_unexecuted_blocks=1 00:17:48.829 00:17:48.829 ' 00:17:48.829 13:37:51 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:17:48.829 13:37:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:48.829 13:37:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:48.829 13:37:51 env -- common/autotest_common.sh@10 -- # set +x 00:17:48.829 ************************************ 00:17:48.829 START TEST env_memory 00:17:48.829 ************************************ 00:17:48.829 13:37:51 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:17:48.829 00:17:48.829 00:17:48.829 CUnit - A unit testing framework for C - Version 2.1-3 00:17:48.829 http://cunit.sourceforge.net/ 00:17:48.829 00:17:48.829 00:17:48.829 Suite: memory 00:17:48.829 Test: alloc and free memory map ...[2024-11-20 13:37:51.636359] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:17:48.829 passed 00:17:48.829 Test: mem map translation ...[2024-11-20 13:37:51.684220] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:17:48.829 [2024-11-20 13:37:51.684298] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:17:48.829 [2024-11-20 13:37:51.684392] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:17:48.829 [2024-11-20 13:37:51.684420] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:17:49.088 passed 00:17:49.088 Test: mem map registration ...[2024-11-20 13:37:51.761475] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:17:49.088 [2024-11-20 13:37:51.761557] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:17:49.088 passed 00:17:49.088 Test: mem map adjacent registrations ...passed 00:17:49.088 00:17:49.088 Run Summary: Type Total Ran Passed Failed Inactive 00:17:49.088 suites 1 1 n/a 0 0 00:17:49.088 tests 4 4 4 0 0 00:17:49.088 asserts 152 152 152 0 n/a 00:17:49.088 00:17:49.088 Elapsed time = 0.265 seconds 00:17:49.088 00:17:49.088 real 0m0.302s 00:17:49.088 user 0m0.271s 00:17:49.088 sys 0m0.025s 00:17:49.088 13:37:51 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.088 13:37:51 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:17:49.088 ************************************ 00:17:49.088 END TEST env_memory 00:17:49.088 ************************************ 00:17:49.088 13:37:51 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:17:49.088 
13:37:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:49.088 13:37:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.088 13:37:51 env -- common/autotest_common.sh@10 -- # set +x 00:17:49.088 ************************************ 00:17:49.088 START TEST env_vtophys 00:17:49.088 ************************************ 00:17:49.088 13:37:51 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:17:49.088 EAL: lib.eal log level changed from notice to debug 00:17:49.088 EAL: Detected lcore 0 as core 0 on socket 0 00:17:49.088 EAL: Detected lcore 1 as core 0 on socket 0 00:17:49.088 EAL: Detected lcore 2 as core 0 on socket 0 00:17:49.088 EAL: Detected lcore 3 as core 0 on socket 0 00:17:49.088 EAL: Detected lcore 4 as core 0 on socket 0 00:17:49.088 EAL: Detected lcore 5 as core 0 on socket 0 00:17:49.088 EAL: Detected lcore 6 as core 0 on socket 0 00:17:49.088 EAL: Detected lcore 7 as core 0 on socket 0 00:17:49.088 EAL: Detected lcore 8 as core 0 on socket 0 00:17:49.088 EAL: Detected lcore 9 as core 0 on socket 0 00:17:49.088 EAL: Maximum logical cores by configuration: 128 00:17:49.088 EAL: Detected CPU lcores: 10 00:17:49.088 EAL: Detected NUMA nodes: 1 00:17:49.088 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:17:49.088 EAL: Detected shared linkage of DPDK 00:17:49.347 EAL: No shared files mode enabled, IPC will be disabled 00:17:49.347 EAL: Selected IOVA mode 'PA' 00:17:49.347 EAL: Probing VFIO support... 00:17:49.347 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:17:49.347 EAL: VFIO modules not loaded, skipping VFIO support... 00:17:49.347 EAL: Ask a virtual area of 0x2e000 bytes 00:17:49.347 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:17:49.347 EAL: Setting up physically contiguous memory... 
00:17:49.347 EAL: Setting maximum number of open files to 524288 00:17:49.347 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:17:49.347 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:17:49.347 EAL: Ask a virtual area of 0x61000 bytes 00:17:49.347 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:17:49.347 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:49.347 EAL: Ask a virtual area of 0x400000000 bytes 00:17:49.347 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:17:49.347 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:17:49.347 EAL: Ask a virtual area of 0x61000 bytes 00:17:49.347 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:17:49.347 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:49.347 EAL: Ask a virtual area of 0x400000000 bytes 00:17:49.347 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:17:49.347 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:17:49.347 EAL: Ask a virtual area of 0x61000 bytes 00:17:49.347 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:17:49.347 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:49.347 EAL: Ask a virtual area of 0x400000000 bytes 00:17:49.347 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:17:49.347 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:17:49.347 EAL: Ask a virtual area of 0x61000 bytes 00:17:49.347 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:17:49.347 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:49.347 EAL: Ask a virtual area of 0x400000000 bytes 00:17:49.347 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:17:49.347 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:17:49.347 EAL: Hugepages will be freed exactly as allocated. 
00:17:49.347 EAL: No shared files mode enabled, IPC is disabled 00:17:49.347 EAL: No shared files mode enabled, IPC is disabled 00:17:49.347 EAL: TSC frequency is ~2200000 KHz 00:17:49.347 EAL: Main lcore 0 is ready (tid=7f941e2f1a40;cpuset=[0]) 00:17:49.347 EAL: Trying to obtain current memory policy. 00:17:49.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:49.347 EAL: Restoring previous memory policy: 0 00:17:49.347 EAL: request: mp_malloc_sync 00:17:49.347 EAL: No shared files mode enabled, IPC is disabled 00:17:49.347 EAL: Heap on socket 0 was expanded by 2MB 00:17:49.347 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:17:49.347 EAL: No PCI address specified using 'addr=' in: bus=pci 00:17:49.347 EAL: Mem event callback 'spdk:(nil)' registered 00:17:49.347 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:17:49.347 00:17:49.347 00:17:49.347 CUnit - A unit testing framework for C - Version 2.1-3 00:17:49.347 http://cunit.sourceforge.net/ 00:17:49.347 00:17:49.347 00:17:49.347 Suite: components_suite 00:17:49.915 Test: vtophys_malloc_test ...passed 00:17:49.915 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:17:49.915 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:49.915 EAL: Restoring previous memory policy: 4 00:17:49.915 EAL: Calling mem event callback 'spdk:(nil)' 00:17:49.915 EAL: request: mp_malloc_sync 00:17:49.915 EAL: No shared files mode enabled, IPC is disabled 00:17:49.915 EAL: Heap on socket 0 was expanded by 4MB 00:17:49.915 EAL: Calling mem event callback 'spdk:(nil)' 00:17:49.915 EAL: request: mp_malloc_sync 00:17:49.915 EAL: No shared files mode enabled, IPC is disabled 00:17:49.915 EAL: Heap on socket 0 was shrunk by 4MB 00:17:49.915 EAL: Trying to obtain current memory policy. 
00:17:49.915 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:49.915 EAL: Restoring previous memory policy: 4 00:17:49.915 EAL: Calling mem event callback 'spdk:(nil)' 00:17:49.915 EAL: request: mp_malloc_sync 00:17:49.915 EAL: No shared files mode enabled, IPC is disabled 00:17:49.915 EAL: Heap on socket 0 was expanded by 6MB 00:17:49.915 EAL: Calling mem event callback 'spdk:(nil)' 00:17:49.915 EAL: request: mp_malloc_sync 00:17:49.915 EAL: No shared files mode enabled, IPC is disabled 00:17:49.915 EAL: Heap on socket 0 was shrunk by 6MB 00:17:49.915 EAL: Trying to obtain current memory policy. 00:17:49.915 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:49.915 EAL: Restoring previous memory policy: 4 00:17:49.915 EAL: Calling mem event callback 'spdk:(nil)' 00:17:49.915 EAL: request: mp_malloc_sync 00:17:49.915 EAL: No shared files mode enabled, IPC is disabled 00:17:49.915 EAL: Heap on socket 0 was expanded by 10MB 00:17:49.915 EAL: Calling mem event callback 'spdk:(nil)' 00:17:49.915 EAL: request: mp_malloc_sync 00:17:49.915 EAL: No shared files mode enabled, IPC is disabled 00:17:49.915 EAL: Heap on socket 0 was shrunk by 10MB 00:17:49.915 EAL: Trying to obtain current memory policy. 00:17:49.915 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:49.915 EAL: Restoring previous memory policy: 4 00:17:49.915 EAL: Calling mem event callback 'spdk:(nil)' 00:17:49.915 EAL: request: mp_malloc_sync 00:17:49.915 EAL: No shared files mode enabled, IPC is disabled 00:17:49.915 EAL: Heap on socket 0 was expanded by 18MB 00:17:49.915 EAL: Calling mem event callback 'spdk:(nil)' 00:17:49.915 EAL: request: mp_malloc_sync 00:17:49.915 EAL: No shared files mode enabled, IPC is disabled 00:17:49.915 EAL: Heap on socket 0 was shrunk by 18MB 00:17:49.915 EAL: Trying to obtain current memory policy. 
00:17:49.915 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:49.915 EAL: Restoring previous memory policy: 4 00:17:49.915 EAL: Calling mem event callback 'spdk:(nil)' 00:17:49.915 EAL: request: mp_malloc_sync 00:17:49.915 EAL: No shared files mode enabled, IPC is disabled 00:17:49.915 EAL: Heap on socket 0 was expanded by 34MB 00:17:49.915 EAL: Calling mem event callback 'spdk:(nil)' 00:17:49.915 EAL: request: mp_malloc_sync 00:17:49.915 EAL: No shared files mode enabled, IPC is disabled 00:17:49.915 EAL: Heap on socket 0 was shrunk by 34MB 00:17:49.915 EAL: Trying to obtain current memory policy. 00:17:49.915 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:50.174 EAL: Restoring previous memory policy: 4 00:17:50.174 EAL: Calling mem event callback 'spdk:(nil)' 00:17:50.174 EAL: request: mp_malloc_sync 00:17:50.174 EAL: No shared files mode enabled, IPC is disabled 00:17:50.174 EAL: Heap on socket 0 was expanded by 66MB 00:17:50.174 EAL: Calling mem event callback 'spdk:(nil)' 00:17:50.174 EAL: request: mp_malloc_sync 00:17:50.174 EAL: No shared files mode enabled, IPC is disabled 00:17:50.174 EAL: Heap on socket 0 was shrunk by 66MB 00:17:50.174 EAL: Trying to obtain current memory policy. 00:17:50.174 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:50.174 EAL: Restoring previous memory policy: 4 00:17:50.174 EAL: Calling mem event callback 'spdk:(nil)' 00:17:50.174 EAL: request: mp_malloc_sync 00:17:50.174 EAL: No shared files mode enabled, IPC is disabled 00:17:50.174 EAL: Heap on socket 0 was expanded by 130MB 00:17:50.432 EAL: Calling mem event callback 'spdk:(nil)' 00:17:50.432 EAL: request: mp_malloc_sync 00:17:50.432 EAL: No shared files mode enabled, IPC is disabled 00:17:50.432 EAL: Heap on socket 0 was shrunk by 130MB 00:17:50.691 EAL: Trying to obtain current memory policy. 
00:17:50.691 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:50.691 EAL: Restoring previous memory policy: 4 00:17:50.691 EAL: Calling mem event callback 'spdk:(nil)' 00:17:50.691 EAL: request: mp_malloc_sync 00:17:50.691 EAL: No shared files mode enabled, IPC is disabled 00:17:50.691 EAL: Heap on socket 0 was expanded by 258MB 00:17:51.286 EAL: Calling mem event callback 'spdk:(nil)' 00:17:51.286 EAL: request: mp_malloc_sync 00:17:51.286 EAL: No shared files mode enabled, IPC is disabled 00:17:51.286 EAL: Heap on socket 0 was shrunk by 258MB 00:17:51.545 EAL: Trying to obtain current memory policy. 00:17:51.545 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:51.803 EAL: Restoring previous memory policy: 4 00:17:51.803 EAL: Calling mem event callback 'spdk:(nil)' 00:17:51.803 EAL: request: mp_malloc_sync 00:17:51.803 EAL: No shared files mode enabled, IPC is disabled 00:17:51.803 EAL: Heap on socket 0 was expanded by 514MB 00:17:52.739 EAL: Calling mem event callback 'spdk:(nil)' 00:17:52.739 EAL: request: mp_malloc_sync 00:17:52.739 EAL: No shared files mode enabled, IPC is disabled 00:17:52.739 EAL: Heap on socket 0 was shrunk by 514MB 00:17:53.351 EAL: Trying to obtain current memory policy. 
00:17:53.351 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:53.613 EAL: Restoring previous memory policy: 4 00:17:53.613 EAL: Calling mem event callback 'spdk:(nil)' 00:17:53.613 EAL: request: mp_malloc_sync 00:17:53.613 EAL: No shared files mode enabled, IPC is disabled 00:17:53.613 EAL: Heap on socket 0 was expanded by 1026MB 00:17:55.511 EAL: Calling mem event callback 'spdk:(nil)' 00:17:55.511 EAL: request: mp_malloc_sync 00:17:55.511 EAL: No shared files mode enabled, IPC is disabled 00:17:55.511 EAL: Heap on socket 0 was shrunk by 1026MB 00:17:56.880 passed 00:17:56.880 00:17:56.880 Run Summary: Type Total Ran Passed Failed Inactive 00:17:56.880 suites 1 1 n/a 0 0 00:17:56.880 tests 2 2 2 0 0 00:17:56.880 asserts 5698 5698 5698 0 n/a 00:17:56.880 00:17:56.880 Elapsed time = 7.464 seconds 00:17:56.880 EAL: Calling mem event callback 'spdk:(nil)' 00:17:56.880 EAL: request: mp_malloc_sync 00:17:56.880 EAL: No shared files mode enabled, IPC is disabled 00:17:56.880 EAL: Heap on socket 0 was shrunk by 2MB 00:17:56.880 EAL: No shared files mode enabled, IPC is disabled 00:17:56.880 EAL: No shared files mode enabled, IPC is disabled 00:17:56.880 EAL: No shared files mode enabled, IPC is disabled 00:17:56.880 00:17:56.880 real 0m7.813s 00:17:56.880 user 0m6.630s 00:17:56.880 sys 0m1.011s 00:17:56.880 13:37:59 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.880 13:37:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:17:56.880 ************************************ 00:17:56.880 END TEST env_vtophys 00:17:56.880 ************************************ 00:17:56.880 13:37:59 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:17:56.880 13:37:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:56.880 13:37:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:56.880 13:37:59 env -- common/autotest_common.sh@10 -- # set +x 00:17:57.138 
************************************ 00:17:57.138 START TEST env_pci 00:17:57.138 ************************************ 00:17:57.138 13:37:59 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:17:57.138 00:17:57.138 00:17:57.138 CUnit - A unit testing framework for C - Version 2.1-3 00:17:57.138 http://cunit.sourceforge.net/ 00:17:57.138 00:17:57.138 00:17:57.138 Suite: pci 00:17:57.138 Test: pci_hook ...[2024-11-20 13:37:59.831166] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56875 has claimed it 00:17:57.138 passed 00:17:57.138 00:17:57.138 EAL: Cannot find device (10000:00:01.0) 00:17:57.138 EAL: Failed to attach device on primary process 00:17:57.138 Run Summary: Type Total Ran Passed Failed Inactive 00:17:57.138 suites 1 1 n/a 0 0 00:17:57.138 tests 1 1 1 0 0 00:17:57.138 asserts 25 25 25 0 n/a 00:17:57.138 00:17:57.138 Elapsed time = 0.006 seconds 00:17:57.138 00:17:57.138 real 0m0.070s 00:17:57.138 user 0m0.037s 00:17:57.138 sys 0m0.030s 00:17:57.138 13:37:59 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.138 13:37:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:17:57.138 ************************************ 00:17:57.138 END TEST env_pci 00:17:57.138 ************************************ 00:17:57.138 13:37:59 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:17:57.138 13:37:59 env -- env/env.sh@15 -- # uname 00:17:57.138 13:37:59 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:17:57.138 13:37:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:17:57.138 13:37:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:17:57.138 13:37:59 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:57.138 13:37:59 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.138 13:37:59 env -- common/autotest_common.sh@10 -- # set +x 00:17:57.138 ************************************ 00:17:57.138 START TEST env_dpdk_post_init 00:17:57.138 ************************************ 00:17:57.138 13:37:59 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:17:57.138 EAL: Detected CPU lcores: 10 00:17:57.138 EAL: Detected NUMA nodes: 1 00:17:57.138 EAL: Detected shared linkage of DPDK 00:17:57.138 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:17:57.138 EAL: Selected IOVA mode 'PA' 00:17:57.396 TELEMETRY: No legacy callbacks, legacy socket not created 00:17:57.396 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:17:57.396 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:17:57.396 Starting DPDK initialization... 00:17:57.396 Starting SPDK post initialization... 00:17:57.396 SPDK NVMe probe 00:17:57.396 Attaching to 0000:00:10.0 00:17:57.396 Attaching to 0000:00:11.0 00:17:57.396 Attached to 0000:00:10.0 00:17:57.396 Attached to 0000:00:11.0 00:17:57.396 Cleaning up... 
00:17:57.396 00:17:57.396 real 0m0.314s 00:17:57.396 user 0m0.110s 00:17:57.396 sys 0m0.103s 00:17:57.396 13:38:00 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.396 13:38:00 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:17:57.396 ************************************ 00:17:57.396 END TEST env_dpdk_post_init 00:17:57.396 ************************************ 00:17:57.396 13:38:00 env -- env/env.sh@26 -- # uname 00:17:57.396 13:38:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:17:57.396 13:38:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:17:57.396 13:38:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:57.396 13:38:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.396 13:38:00 env -- common/autotest_common.sh@10 -- # set +x 00:17:57.396 ************************************ 00:17:57.396 START TEST env_mem_callbacks 00:17:57.396 ************************************ 00:17:57.396 13:38:00 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:17:57.654 EAL: Detected CPU lcores: 10 00:17:57.654 EAL: Detected NUMA nodes: 1 00:17:57.654 EAL: Detected shared linkage of DPDK 00:17:57.654 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:17:57.654 EAL: Selected IOVA mode 'PA' 00:17:57.654 TELEMETRY: No legacy callbacks, legacy socket not created 00:17:57.654 00:17:57.654 00:17:57.654 CUnit - A unit testing framework for C - Version 2.1-3 00:17:57.654 http://cunit.sourceforge.net/ 00:17:57.654 00:17:57.654 00:17:57.654 Suite: memory 00:17:57.654 Test: test ... 
00:17:57.654 register 0x200000200000 2097152 00:17:57.654 malloc 3145728 00:17:57.654 register 0x200000400000 4194304 00:17:57.654 buf 0x2000004fffc0 len 3145728 PASSED 00:17:57.654 malloc 64 00:17:57.654 buf 0x2000004ffec0 len 64 PASSED 00:17:57.654 malloc 4194304 00:17:57.654 register 0x200000800000 6291456 00:17:57.654 buf 0x2000009fffc0 len 4194304 PASSED 00:17:57.654 free 0x2000004fffc0 3145728 00:17:57.654 free 0x2000004ffec0 64 00:17:57.654 unregister 0x200000400000 4194304 PASSED 00:17:57.654 free 0x2000009fffc0 4194304 00:17:57.654 unregister 0x200000800000 6291456 PASSED 00:17:57.654 malloc 8388608 00:17:57.654 register 0x200000400000 10485760 00:17:57.654 buf 0x2000005fffc0 len 8388608 PASSED 00:17:57.654 free 0x2000005fffc0 8388608 00:17:57.654 unregister 0x200000400000 10485760 PASSED 00:17:57.654 passed 00:17:57.654 00:17:57.654 Run Summary: Type Total Ran Passed Failed Inactive 00:17:57.654 suites 1 1 n/a 0 0 00:17:57.654 tests 1 1 1 0 0 00:17:57.654 asserts 15 15 15 0 n/a 00:17:57.654 00:17:57.654 Elapsed time = 0.060 seconds 00:17:57.654 00:17:57.654 real 0m0.254s 00:17:57.654 user 0m0.087s 00:17:57.654 sys 0m0.065s 00:17:57.654 13:38:00 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.654 13:38:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:17:57.654 ************************************ 00:17:57.654 END TEST env_mem_callbacks 00:17:57.654 ************************************ 00:17:57.923 00:17:57.923 real 0m9.201s 00:17:57.923 user 0m7.307s 00:17:57.923 sys 0m1.489s 00:17:57.923 13:38:00 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.923 13:38:00 env -- common/autotest_common.sh@10 -- # set +x 00:17:57.923 ************************************ 00:17:57.923 END TEST env 00:17:57.923 ************************************ 00:17:57.923 13:38:00 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:17:57.923 13:38:00 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:57.923 13:38:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.923 13:38:00 -- common/autotest_common.sh@10 -- # set +x 00:17:57.923 ************************************ 00:17:57.923 START TEST rpc 00:17:57.923 ************************************ 00:17:57.923 13:38:00 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:17:57.923 * Looking for test storage... 00:17:57.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:17:57.923 13:38:00 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:57.923 13:38:00 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:17:57.923 13:38:00 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:57.923 13:38:00 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:57.923 13:38:00 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.923 13:38:00 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.923 13:38:00 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.923 13:38:00 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.923 13:38:00 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.923 13:38:00 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.923 13:38:00 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.923 13:38:00 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.923 13:38:00 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.923 13:38:00 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.923 13:38:00 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.923 13:38:00 rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:57.923 13:38:00 rpc -- scripts/common.sh@345 -- # : 1 00:17:57.923 13:38:00 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.923 13:38:00 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:57.923 13:38:00 rpc -- scripts/common.sh@365 -- # decimal 1 00:17:57.923 13:38:00 rpc -- scripts/common.sh@353 -- # local d=1 00:17:57.923 13:38:00 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.923 13:38:00 rpc -- scripts/common.sh@355 -- # echo 1 00:17:57.923 13:38:00 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.923 13:38:00 rpc -- scripts/common.sh@366 -- # decimal 2 00:17:57.923 13:38:00 rpc -- scripts/common.sh@353 -- # local d=2 00:17:57.923 13:38:00 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.923 13:38:00 rpc -- scripts/common.sh@355 -- # echo 2 00:17:57.923 13:38:00 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.923 13:38:00 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.923 13:38:00 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.923 13:38:00 rpc -- scripts/common.sh@368 -- # return 0 00:17:57.923 13:38:00 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.923 13:38:00 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:57.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.923 --rc genhtml_branch_coverage=1 00:17:57.923 --rc genhtml_function_coverage=1 00:17:57.923 --rc genhtml_legend=1 00:17:57.923 --rc geninfo_all_blocks=1 00:17:57.923 --rc geninfo_unexecuted_blocks=1 00:17:57.923 00:17:57.923 ' 00:17:57.923 13:38:00 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:57.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.923 --rc genhtml_branch_coverage=1 00:17:57.923 --rc genhtml_function_coverage=1 00:17:57.923 --rc genhtml_legend=1 00:17:57.923 --rc geninfo_all_blocks=1 00:17:57.923 --rc geninfo_unexecuted_blocks=1 00:17:57.923 00:17:57.923 ' 00:17:57.923 13:38:00 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:57.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:17:57.923 --rc genhtml_branch_coverage=1 00:17:57.923 --rc genhtml_function_coverage=1 00:17:57.923 --rc genhtml_legend=1 00:17:57.923 --rc geninfo_all_blocks=1 00:17:57.923 --rc geninfo_unexecuted_blocks=1 00:17:57.923 00:17:57.923 ' 00:17:57.923 13:38:00 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:57.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.923 --rc genhtml_branch_coverage=1 00:17:57.923 --rc genhtml_function_coverage=1 00:17:57.923 --rc genhtml_legend=1 00:17:57.923 --rc geninfo_all_blocks=1 00:17:57.923 --rc geninfo_unexecuted_blocks=1 00:17:57.923 00:17:57.923 ' 00:17:57.923 13:38:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57002 00:17:57.923 13:38:00 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:17:57.923 13:38:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:58.181 13:38:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57002 00:17:58.181 13:38:00 rpc -- common/autotest_common.sh@835 -- # '[' -z 57002 ']' 00:17:58.181 13:38:00 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.181 13:38:00 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.181 13:38:00 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.181 13:38:00 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.181 13:38:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.181 [2024-11-20 13:38:00.972415] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:17:58.181 [2024-11-20 13:38:00.972597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57002 ] 00:17:58.440 [2024-11-20 13:38:01.161818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.440 [2024-11-20 13:38:01.314835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:17:58.440 [2024-11-20 13:38:01.314939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57002' to capture a snapshot of events at runtime. 00:17:58.440 [2024-11-20 13:38:01.314968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.440 [2024-11-20 13:38:01.314985] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.440 [2024-11-20 13:38:01.314997] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57002 for offline analysis/debug. 
00:17:58.440 [2024-11-20 13:38:01.316435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.373 13:38:02 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.373 13:38:02 rpc -- common/autotest_common.sh@868 -- # return 0 00:17:59.373 13:38:02 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:17:59.373 13:38:02 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:17:59.373 13:38:02 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:17:59.373 13:38:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:17:59.373 13:38:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:59.373 13:38:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.373 13:38:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.373 ************************************ 00:17:59.373 START TEST rpc_integrity 00:17:59.373 ************************************ 00:17:59.373 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:17:59.373 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:59.373 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.373 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:59.373 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.373 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:17:59.373 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:17:59.632 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:17:59.632 13:38:02 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:17:59.632 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.632 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:59.632 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.632 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:17:59.632 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:17:59.632 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.632 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:59.632 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.632 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:17:59.632 { 00:17:59.632 "name": "Malloc0", 00:17:59.632 "aliases": [ 00:17:59.632 "80e4d6fd-2c27-4e5c-9dd2-437c90eb373e" 00:17:59.632 ], 00:17:59.632 "product_name": "Malloc disk", 00:17:59.632 "block_size": 512, 00:17:59.632 "num_blocks": 16384, 00:17:59.632 "uuid": "80e4d6fd-2c27-4e5c-9dd2-437c90eb373e", 00:17:59.632 "assigned_rate_limits": { 00:17:59.632 "rw_ios_per_sec": 0, 00:17:59.632 "rw_mbytes_per_sec": 0, 00:17:59.632 "r_mbytes_per_sec": 0, 00:17:59.632 "w_mbytes_per_sec": 0 00:17:59.632 }, 00:17:59.632 "claimed": false, 00:17:59.632 "zoned": false, 00:17:59.632 "supported_io_types": { 00:17:59.632 "read": true, 00:17:59.632 "write": true, 00:17:59.632 "unmap": true, 00:17:59.632 "flush": true, 00:17:59.632 "reset": true, 00:17:59.632 "nvme_admin": false, 00:17:59.632 "nvme_io": false, 00:17:59.632 "nvme_io_md": false, 00:17:59.632 "write_zeroes": true, 00:17:59.632 "zcopy": true, 00:17:59.632 "get_zone_info": false, 00:17:59.632 "zone_management": false, 00:17:59.632 "zone_append": false, 00:17:59.632 "compare": false, 00:17:59.632 "compare_and_write": false, 00:17:59.632 "abort": true, 00:17:59.632 "seek_hole": false, 
00:17:59.632 "seek_data": false, 00:17:59.632 "copy": true, 00:17:59.632 "nvme_iov_md": false 00:17:59.632 }, 00:17:59.632 "memory_domains": [ 00:17:59.632 { 00:17:59.632 "dma_device_id": "system", 00:17:59.632 "dma_device_type": 1 00:17:59.632 }, 00:17:59.632 { 00:17:59.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.632 "dma_device_type": 2 00:17:59.632 } 00:17:59.632 ], 00:17:59.632 "driver_specific": {} 00:17:59.632 } 00:17:59.632 ]' 00:17:59.632 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:17:59.632 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:17:59.632 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:17:59.632 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.633 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:59.633 [2024-11-20 13:38:02.424019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:17:59.633 [2024-11-20 13:38:02.424111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.633 [2024-11-20 13:38:02.424146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:59.633 [2024-11-20 13:38:02.424168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.633 [2024-11-20 13:38:02.427337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.633 [2024-11-20 13:38:02.427393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:17:59.633 Passthru0 00:17:59.633 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.633 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:17:59.633 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.633 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:17:59.633 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.633 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:17:59.633 { 00:17:59.633 "name": "Malloc0", 00:17:59.633 "aliases": [ 00:17:59.633 "80e4d6fd-2c27-4e5c-9dd2-437c90eb373e" 00:17:59.633 ], 00:17:59.633 "product_name": "Malloc disk", 00:17:59.633 "block_size": 512, 00:17:59.633 "num_blocks": 16384, 00:17:59.633 "uuid": "80e4d6fd-2c27-4e5c-9dd2-437c90eb373e", 00:17:59.633 "assigned_rate_limits": { 00:17:59.633 "rw_ios_per_sec": 0, 00:17:59.633 "rw_mbytes_per_sec": 0, 00:17:59.633 "r_mbytes_per_sec": 0, 00:17:59.633 "w_mbytes_per_sec": 0 00:17:59.633 }, 00:17:59.633 "claimed": true, 00:17:59.633 "claim_type": "exclusive_write", 00:17:59.633 "zoned": false, 00:17:59.633 "supported_io_types": { 00:17:59.633 "read": true, 00:17:59.633 "write": true, 00:17:59.633 "unmap": true, 00:17:59.633 "flush": true, 00:17:59.633 "reset": true, 00:17:59.633 "nvme_admin": false, 00:17:59.633 "nvme_io": false, 00:17:59.633 "nvme_io_md": false, 00:17:59.633 "write_zeroes": true, 00:17:59.633 "zcopy": true, 00:17:59.633 "get_zone_info": false, 00:17:59.633 "zone_management": false, 00:17:59.633 "zone_append": false, 00:17:59.633 "compare": false, 00:17:59.633 "compare_and_write": false, 00:17:59.633 "abort": true, 00:17:59.633 "seek_hole": false, 00:17:59.633 "seek_data": false, 00:17:59.633 "copy": true, 00:17:59.633 "nvme_iov_md": false 00:17:59.633 }, 00:17:59.633 "memory_domains": [ 00:17:59.633 { 00:17:59.633 "dma_device_id": "system", 00:17:59.633 "dma_device_type": 1 00:17:59.633 }, 00:17:59.633 { 00:17:59.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.633 "dma_device_type": 2 00:17:59.633 } 00:17:59.633 ], 00:17:59.633 "driver_specific": {} 00:17:59.633 }, 00:17:59.633 { 00:17:59.633 "name": "Passthru0", 00:17:59.633 "aliases": [ 00:17:59.633 "2076978d-bfff-505b-aa1b-80914031050f" 00:17:59.633 ], 00:17:59.633 "product_name": "passthru", 00:17:59.633 
"block_size": 512, 00:17:59.633 "num_blocks": 16384, 00:17:59.633 "uuid": "2076978d-bfff-505b-aa1b-80914031050f", 00:17:59.633 "assigned_rate_limits": { 00:17:59.633 "rw_ios_per_sec": 0, 00:17:59.633 "rw_mbytes_per_sec": 0, 00:17:59.633 "r_mbytes_per_sec": 0, 00:17:59.633 "w_mbytes_per_sec": 0 00:17:59.633 }, 00:17:59.633 "claimed": false, 00:17:59.633 "zoned": false, 00:17:59.633 "supported_io_types": { 00:17:59.633 "read": true, 00:17:59.633 "write": true, 00:17:59.633 "unmap": true, 00:17:59.633 "flush": true, 00:17:59.633 "reset": true, 00:17:59.633 "nvme_admin": false, 00:17:59.633 "nvme_io": false, 00:17:59.633 "nvme_io_md": false, 00:17:59.633 "write_zeroes": true, 00:17:59.633 "zcopy": true, 00:17:59.633 "get_zone_info": false, 00:17:59.633 "zone_management": false, 00:17:59.633 "zone_append": false, 00:17:59.633 "compare": false, 00:17:59.633 "compare_and_write": false, 00:17:59.633 "abort": true, 00:17:59.633 "seek_hole": false, 00:17:59.633 "seek_data": false, 00:17:59.633 "copy": true, 00:17:59.633 "nvme_iov_md": false 00:17:59.633 }, 00:17:59.633 "memory_domains": [ 00:17:59.633 { 00:17:59.633 "dma_device_id": "system", 00:17:59.633 "dma_device_type": 1 00:17:59.633 }, 00:17:59.633 { 00:17:59.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.633 "dma_device_type": 2 00:17:59.633 } 00:17:59.633 ], 00:17:59.633 "driver_specific": { 00:17:59.633 "passthru": { 00:17:59.633 "name": "Passthru0", 00:17:59.633 "base_bdev_name": "Malloc0" 00:17:59.633 } 00:17:59.633 } 00:17:59.633 } 00:17:59.633 ]' 00:17:59.633 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:17:59.633 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:17:59.633 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:17:59.633 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.633 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:59.633 13:38:02 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.633 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:59.633 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.633 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:59.892 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.892 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:59.892 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.892 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:59.892 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.892 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:17:59.892 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:17:59.892 13:38:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:17:59.892 00:17:59.892 real 0m0.363s 00:17:59.892 user 0m0.220s 00:17:59.892 sys 0m0.037s 00:17:59.892 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.892 13:38:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:59.892 ************************************ 00:17:59.892 END TEST rpc_integrity 00:17:59.892 ************************************ 00:17:59.892 13:38:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:17:59.892 13:38:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:59.892 13:38:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.892 13:38:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.892 ************************************ 00:17:59.892 START TEST rpc_plugins 00:17:59.892 ************************************ 00:17:59.892 13:38:02 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:17:59.892 13:38:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:17:59.892 13:38:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.892 13:38:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:59.892 13:38:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.892 13:38:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:17:59.892 13:38:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:17:59.892 13:38:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.892 13:38:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:59.892 13:38:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.892 13:38:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:17:59.892 { 00:17:59.892 "name": "Malloc1", 00:17:59.892 "aliases": [ 00:17:59.892 "de5a8d17-b3d2-473f-b802-41da36e237ab" 00:17:59.892 ], 00:17:59.892 "product_name": "Malloc disk", 00:17:59.892 "block_size": 4096, 00:17:59.892 "num_blocks": 256, 00:17:59.892 "uuid": "de5a8d17-b3d2-473f-b802-41da36e237ab", 00:17:59.892 "assigned_rate_limits": { 00:17:59.892 "rw_ios_per_sec": 0, 00:17:59.892 "rw_mbytes_per_sec": 0, 00:17:59.892 "r_mbytes_per_sec": 0, 00:17:59.892 "w_mbytes_per_sec": 0 00:17:59.892 }, 00:17:59.892 "claimed": false, 00:17:59.892 "zoned": false, 00:17:59.892 "supported_io_types": { 00:17:59.892 "read": true, 00:17:59.892 "write": true, 00:17:59.892 "unmap": true, 00:17:59.892 "flush": true, 00:17:59.892 "reset": true, 00:17:59.892 "nvme_admin": false, 00:17:59.892 "nvme_io": false, 00:17:59.892 "nvme_io_md": false, 00:17:59.892 "write_zeroes": true, 00:17:59.892 "zcopy": true, 00:17:59.892 "get_zone_info": false, 00:17:59.892 "zone_management": false, 00:17:59.892 "zone_append": false, 00:17:59.892 "compare": false, 00:17:59.892 "compare_and_write": false, 00:17:59.892 "abort": true, 00:17:59.892 "seek_hole": false, 00:17:59.892 "seek_data": false, 00:17:59.892 "copy": 
true, 00:17:59.892 "nvme_iov_md": false 00:17:59.892 }, 00:17:59.892 "memory_domains": [ 00:17:59.892 { 00:17:59.892 "dma_device_id": "system", 00:17:59.892 "dma_device_type": 1 00:17:59.892 }, 00:17:59.892 { 00:17:59.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.892 "dma_device_type": 2 00:17:59.892 } 00:17:59.892 ], 00:17:59.892 "driver_specific": {} 00:17:59.892 } 00:17:59.892 ]' 00:17:59.892 13:38:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:17:59.892 13:38:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:17:59.892 13:38:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:17:59.892 13:38:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.893 13:38:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:59.893 13:38:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.893 13:38:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:17:59.893 13:38:02 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.893 13:38:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:59.893 13:38:02 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.893 13:38:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:17:59.893 13:38:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:18:00.152 13:38:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:18:00.152 00:18:00.152 real 0m0.166s 00:18:00.152 user 0m0.104s 00:18:00.152 sys 0m0.018s 00:18:00.152 13:38:02 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.152 13:38:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:18:00.152 ************************************ 00:18:00.152 END TEST rpc_plugins 00:18:00.152 ************************************ 00:18:00.152 13:38:02 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:18:00.152 13:38:02 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:00.152 13:38:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.152 13:38:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.152 ************************************ 00:18:00.152 START TEST rpc_trace_cmd_test 00:18:00.152 ************************************ 00:18:00.152 13:38:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:18:00.152 13:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:18:00.152 13:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:18:00.152 13:38:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.152 13:38:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.152 13:38:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.152 13:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:18:00.152 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57002", 00:18:00.152 "tpoint_group_mask": "0x8", 00:18:00.152 "iscsi_conn": { 00:18:00.152 "mask": "0x2", 00:18:00.152 "tpoint_mask": "0x0" 00:18:00.152 }, 00:18:00.152 "scsi": { 00:18:00.152 "mask": "0x4", 00:18:00.152 "tpoint_mask": "0x0" 00:18:00.152 }, 00:18:00.152 "bdev": { 00:18:00.152 "mask": "0x8", 00:18:00.152 "tpoint_mask": "0xffffffffffffffff" 00:18:00.152 }, 00:18:00.152 "nvmf_rdma": { 00:18:00.152 "mask": "0x10", 00:18:00.152 "tpoint_mask": "0x0" 00:18:00.152 }, 00:18:00.152 "nvmf_tcp": { 00:18:00.152 "mask": "0x20", 00:18:00.152 "tpoint_mask": "0x0" 00:18:00.152 }, 00:18:00.152 "ftl": { 00:18:00.152 "mask": "0x40", 00:18:00.152 "tpoint_mask": "0x0" 00:18:00.152 }, 00:18:00.152 "blobfs": { 00:18:00.152 "mask": "0x80", 00:18:00.152 "tpoint_mask": "0x0" 00:18:00.152 }, 00:18:00.152 "dsa": { 00:18:00.152 "mask": "0x200", 00:18:00.152 "tpoint_mask": "0x0" 00:18:00.152 }, 00:18:00.152 "thread": { 00:18:00.152 "mask": "0x400", 00:18:00.152 
"tpoint_mask": "0x0" 00:18:00.152 }, 00:18:00.152 "nvme_pcie": { 00:18:00.152 "mask": "0x800", 00:18:00.152 "tpoint_mask": "0x0" 00:18:00.152 }, 00:18:00.152 "iaa": { 00:18:00.152 "mask": "0x1000", 00:18:00.152 "tpoint_mask": "0x0" 00:18:00.152 }, 00:18:00.152 "nvme_tcp": { 00:18:00.152 "mask": "0x2000", 00:18:00.152 "tpoint_mask": "0x0" 00:18:00.152 }, 00:18:00.152 "bdev_nvme": { 00:18:00.152 "mask": "0x4000", 00:18:00.152 "tpoint_mask": "0x0" 00:18:00.152 }, 00:18:00.152 "sock": { 00:18:00.152 "mask": "0x8000", 00:18:00.152 "tpoint_mask": "0x0" 00:18:00.152 }, 00:18:00.152 "blob": { 00:18:00.152 "mask": "0x10000", 00:18:00.152 "tpoint_mask": "0x0" 00:18:00.152 }, 00:18:00.152 "bdev_raid": { 00:18:00.152 "mask": "0x20000", 00:18:00.152 "tpoint_mask": "0x0" 00:18:00.152 }, 00:18:00.152 "scheduler": { 00:18:00.152 "mask": "0x40000", 00:18:00.152 "tpoint_mask": "0x0" 00:18:00.152 } 00:18:00.152 }' 00:18:00.152 13:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:18:00.152 13:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:18:00.152 13:38:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:18:00.152 13:38:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:18:00.152 13:38:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:18:00.411 13:38:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:18:00.411 13:38:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:18:00.411 13:38:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:18:00.411 13:38:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:18:00.411 13:38:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:18:00.411 00:18:00.411 real 0m0.279s 00:18:00.411 user 0m0.242s 00:18:00.411 sys 0m0.027s 00:18:00.411 13:38:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:18:00.411 13:38:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.411 ************************************ 00:18:00.411 END TEST rpc_trace_cmd_test 00:18:00.411 ************************************ 00:18:00.411 13:38:03 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:18:00.411 13:38:03 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:18:00.411 13:38:03 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:18:00.411 13:38:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:00.411 13:38:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.411 13:38:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:00.411 ************************************ 00:18:00.411 START TEST rpc_daemon_integrity 00:18:00.411 ************************************ 00:18:00.411 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:18:00.411 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:18:00.411 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.411 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:00.411 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.411 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:18:00.411 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:18:00.411 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:18:00.411 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:18:00.411 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.411 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:00.411 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.411 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:18:00.411 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:18:00.411 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.411 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:00.670 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.670 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:18:00.670 { 00:18:00.670 "name": "Malloc2", 00:18:00.670 "aliases": [ 00:18:00.670 "578ed52c-b6ea-4e1d-8bab-ed791909953e" 00:18:00.670 ], 00:18:00.670 "product_name": "Malloc disk", 00:18:00.670 "block_size": 512, 00:18:00.670 "num_blocks": 16384, 00:18:00.670 "uuid": "578ed52c-b6ea-4e1d-8bab-ed791909953e", 00:18:00.670 "assigned_rate_limits": { 00:18:00.670 "rw_ios_per_sec": 0, 00:18:00.670 "rw_mbytes_per_sec": 0, 00:18:00.670 "r_mbytes_per_sec": 0, 00:18:00.670 "w_mbytes_per_sec": 0 00:18:00.670 }, 00:18:00.670 "claimed": false, 00:18:00.670 "zoned": false, 00:18:00.670 "supported_io_types": { 00:18:00.670 "read": true, 00:18:00.670 "write": true, 00:18:00.670 "unmap": true, 00:18:00.670 "flush": true, 00:18:00.670 "reset": true, 00:18:00.670 "nvme_admin": false, 00:18:00.670 "nvme_io": false, 00:18:00.670 "nvme_io_md": false, 00:18:00.670 "write_zeroes": true, 00:18:00.670 "zcopy": true, 00:18:00.670 "get_zone_info": false, 00:18:00.670 "zone_management": false, 00:18:00.670 "zone_append": false, 00:18:00.670 "compare": false, 00:18:00.670 "compare_and_write": false, 00:18:00.670 "abort": true, 00:18:00.670 "seek_hole": false, 00:18:00.670 "seek_data": false, 00:18:00.670 "copy": true, 00:18:00.670 "nvme_iov_md": false 00:18:00.670 }, 00:18:00.670 "memory_domains": [ 00:18:00.670 { 00:18:00.670 "dma_device_id": "system", 00:18:00.670 "dma_device_type": 1 00:18:00.670 }, 00:18:00.670 { 00:18:00.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.670 "dma_device_type": 2 00:18:00.670 } 
00:18:00.670 ], 00:18:00.670 "driver_specific": {} 00:18:00.670 } 00:18:00.670 ]' 00:18:00.670 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:18:00.670 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:18:00.670 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:18:00.670 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.670 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:00.670 [2024-11-20 13:38:03.391708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:18:00.670 [2024-11-20 13:38:03.391798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.670 [2024-11-20 13:38:03.391833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:00.670 [2024-11-20 13:38:03.391852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.670 [2024-11-20 13:38:03.394962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.670 [2024-11-20 13:38:03.395014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:18:00.670 Passthru0 00:18:00.670 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.670 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:18:00.670 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.670 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:00.670 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.670 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:18:00.670 { 00:18:00.670 "name": "Malloc2", 00:18:00.670 "aliases": [ 00:18:00.670 "578ed52c-b6ea-4e1d-8bab-ed791909953e" 
00:18:00.670 ], 00:18:00.670 "product_name": "Malloc disk", 00:18:00.670 "block_size": 512, 00:18:00.670 "num_blocks": 16384, 00:18:00.670 "uuid": "578ed52c-b6ea-4e1d-8bab-ed791909953e", 00:18:00.670 "assigned_rate_limits": { 00:18:00.670 "rw_ios_per_sec": 0, 00:18:00.670 "rw_mbytes_per_sec": 0, 00:18:00.670 "r_mbytes_per_sec": 0, 00:18:00.670 "w_mbytes_per_sec": 0 00:18:00.670 }, 00:18:00.670 "claimed": true, 00:18:00.670 "claim_type": "exclusive_write", 00:18:00.670 "zoned": false, 00:18:00.670 "supported_io_types": { 00:18:00.670 "read": true, 00:18:00.670 "write": true, 00:18:00.670 "unmap": true, 00:18:00.670 "flush": true, 00:18:00.670 "reset": true, 00:18:00.670 "nvme_admin": false, 00:18:00.670 "nvme_io": false, 00:18:00.670 "nvme_io_md": false, 00:18:00.670 "write_zeroes": true, 00:18:00.670 "zcopy": true, 00:18:00.670 "get_zone_info": false, 00:18:00.670 "zone_management": false, 00:18:00.670 "zone_append": false, 00:18:00.670 "compare": false, 00:18:00.670 "compare_and_write": false, 00:18:00.670 "abort": true, 00:18:00.670 "seek_hole": false, 00:18:00.670 "seek_data": false, 00:18:00.670 "copy": true, 00:18:00.670 "nvme_iov_md": false 00:18:00.670 }, 00:18:00.670 "memory_domains": [ 00:18:00.670 { 00:18:00.670 "dma_device_id": "system", 00:18:00.670 "dma_device_type": 1 00:18:00.670 }, 00:18:00.670 { 00:18:00.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.670 "dma_device_type": 2 00:18:00.670 } 00:18:00.670 ], 00:18:00.670 "driver_specific": {} 00:18:00.670 }, 00:18:00.670 { 00:18:00.670 "name": "Passthru0", 00:18:00.670 "aliases": [ 00:18:00.670 "a78fe130-aa4c-53a1-a47e-26890516c3ef" 00:18:00.670 ], 00:18:00.670 "product_name": "passthru", 00:18:00.670 "block_size": 512, 00:18:00.670 "num_blocks": 16384, 00:18:00.670 "uuid": "a78fe130-aa4c-53a1-a47e-26890516c3ef", 00:18:00.670 "assigned_rate_limits": { 00:18:00.670 "rw_ios_per_sec": 0, 00:18:00.670 "rw_mbytes_per_sec": 0, 00:18:00.670 "r_mbytes_per_sec": 0, 00:18:00.670 "w_mbytes_per_sec": 0 
00:18:00.670 }, 00:18:00.670 "claimed": false, 00:18:00.670 "zoned": false, 00:18:00.670 "supported_io_types": { 00:18:00.670 "read": true, 00:18:00.670 "write": true, 00:18:00.670 "unmap": true, 00:18:00.670 "flush": true, 00:18:00.670 "reset": true, 00:18:00.670 "nvme_admin": false, 00:18:00.670 "nvme_io": false, 00:18:00.670 "nvme_io_md": false, 00:18:00.670 "write_zeroes": true, 00:18:00.670 "zcopy": true, 00:18:00.670 "get_zone_info": false, 00:18:00.670 "zone_management": false, 00:18:00.670 "zone_append": false, 00:18:00.670 "compare": false, 00:18:00.670 "compare_and_write": false, 00:18:00.670 "abort": true, 00:18:00.670 "seek_hole": false, 00:18:00.670 "seek_data": false, 00:18:00.670 "copy": true, 00:18:00.670 "nvme_iov_md": false 00:18:00.670 }, 00:18:00.670 "memory_domains": [ 00:18:00.670 { 00:18:00.670 "dma_device_id": "system", 00:18:00.670 "dma_device_type": 1 00:18:00.670 }, 00:18:00.670 { 00:18:00.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.671 "dma_device_type": 2 00:18:00.671 } 00:18:00.671 ], 00:18:00.671 "driver_specific": { 00:18:00.671 "passthru": { 00:18:00.671 "name": "Passthru0", 00:18:00.671 "base_bdev_name": "Malloc2" 00:18:00.671 } 00:18:00.671 } 00:18:00.671 } 00:18:00.671 ]' 00:18:00.671 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:18:00.671 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:18:00.671 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:18:00.671 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.671 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:00.671 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.671 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:00.671 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:00.671 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:00.671 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.671 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:00.671 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.671 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:00.671 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.671 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:18:00.671 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:18:00.929 13:38:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:18:00.929 00:18:00.929 real 0m0.378s 00:18:00.929 user 0m0.231s 00:18:00.929 sys 0m0.039s 00:18:00.929 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.929 13:38:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:18:00.929 ************************************ 00:18:00.929 END TEST rpc_daemon_integrity 00:18:00.929 ************************************ 00:18:00.929 13:38:03 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:00.929 13:38:03 rpc -- rpc/rpc.sh@84 -- # killprocess 57002 00:18:00.929 13:38:03 rpc -- common/autotest_common.sh@954 -- # '[' -z 57002 ']' 00:18:00.929 13:38:03 rpc -- common/autotest_common.sh@958 -- # kill -0 57002 00:18:00.929 13:38:03 rpc -- common/autotest_common.sh@959 -- # uname 00:18:00.929 13:38:03 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.929 13:38:03 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57002 00:18:00.929 13:38:03 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.929 13:38:03 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.929 
killing process with pid 57002 00:18:00.929 13:38:03 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57002' 00:18:00.929 13:38:03 rpc -- common/autotest_common.sh@973 -- # kill 57002 00:18:00.929 13:38:03 rpc -- common/autotest_common.sh@978 -- # wait 57002 00:18:03.531 00:18:03.531 real 0m5.274s 00:18:03.531 user 0m6.019s 00:18:03.531 sys 0m0.905s 00:18:03.531 13:38:05 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:03.531 ************************************ 00:18:03.531 END TEST rpc 00:18:03.531 ************************************ 00:18:03.531 13:38:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.531 13:38:05 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:18:03.531 13:38:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:03.531 13:38:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:03.531 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:18:03.531 ************************************ 00:18:03.531 START TEST skip_rpc 00:18:03.531 ************************************ 00:18:03.531 13:38:05 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:18:03.531 * Looking for test storage... 
00:18:03.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:18:03.531 13:38:06 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:03.531 13:38:06 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:18:03.531 13:38:06 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:03.531 13:38:06 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@345 -- # : 1 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:03.531 13:38:06 skip_rpc -- scripts/common.sh@368 -- # return 0 00:18:03.531 13:38:06 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:03.531 13:38:06 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:03.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.531 --rc genhtml_branch_coverage=1 00:18:03.531 --rc genhtml_function_coverage=1 00:18:03.531 --rc genhtml_legend=1 00:18:03.531 --rc geninfo_all_blocks=1 00:18:03.531 --rc geninfo_unexecuted_blocks=1 00:18:03.531 00:18:03.531 ' 00:18:03.531 13:38:06 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:03.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.531 --rc genhtml_branch_coverage=1 00:18:03.531 --rc genhtml_function_coverage=1 00:18:03.532 --rc genhtml_legend=1 00:18:03.532 --rc geninfo_all_blocks=1 00:18:03.532 --rc geninfo_unexecuted_blocks=1 00:18:03.532 00:18:03.532 ' 00:18:03.532 13:38:06 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:18:03.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.532 --rc genhtml_branch_coverage=1 00:18:03.532 --rc genhtml_function_coverage=1 00:18:03.532 --rc genhtml_legend=1 00:18:03.532 --rc geninfo_all_blocks=1 00:18:03.532 --rc geninfo_unexecuted_blocks=1 00:18:03.532 00:18:03.532 ' 00:18:03.532 13:38:06 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:03.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.532 --rc genhtml_branch_coverage=1 00:18:03.532 --rc genhtml_function_coverage=1 00:18:03.532 --rc genhtml_legend=1 00:18:03.532 --rc geninfo_all_blocks=1 00:18:03.532 --rc geninfo_unexecuted_blocks=1 00:18:03.532 00:18:03.532 ' 00:18:03.532 13:38:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:18:03.532 13:38:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:18:03.532 13:38:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:18:03.532 13:38:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:03.532 13:38:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:03.532 13:38:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.532 ************************************ 00:18:03.532 START TEST skip_rpc 00:18:03.532 ************************************ 00:18:03.532 13:38:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:18:03.532 13:38:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57232 00:18:03.532 13:38:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:18:03.532 13:38:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:18:03.532 13:38:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:18:03.532 [2024-11-20 13:38:06.255918] Starting SPDK v25.01-pre 
git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:18:03.532 [2024-11-20 13:38:06.256105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57232 ] 00:18:03.532 [2024-11-20 13:38:06.441019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.790 [2024-11-20 13:38:06.572863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57232 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57232 ']' 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57232 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57232 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57232' 00:18:09.059 killing process with pid 57232 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57232 00:18:09.059 13:38:11 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57232 00:18:10.502 00:18:10.502 real 0m7.284s 00:18:10.502 user 0m6.724s 00:18:10.502 sys 0m0.455s 00:18:10.502 13:38:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.502 13:38:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.502 ************************************ 00:18:10.502 END TEST skip_rpc 00:18:10.502 ************************************ 00:18:10.761 13:38:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:18:10.761 13:38:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:10.761 13:38:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.761 13:38:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.761 
************************************ 00:18:10.761 START TEST skip_rpc_with_json 00:18:10.761 ************************************ 00:18:10.761 13:38:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:18:10.761 13:38:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:18:10.761 13:38:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57336 00:18:10.761 13:38:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:18:10.761 13:38:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57336 00:18:10.761 13:38:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:10.761 13:38:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57336 ']' 00:18:10.761 13:38:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.761 13:38:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.761 13:38:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.761 13:38:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.761 13:38:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:10.761 [2024-11-20 13:38:13.614160] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:18:10.761 [2024-11-20 13:38:13.614321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57336 ] 00:18:11.022 [2024-11-20 13:38:13.788781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.022 [2024-11-20 13:38:13.921376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.958 13:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.958 13:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:18:11.958 13:38:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:18:11.958 13:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.958 13:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:11.958 [2024-11-20 13:38:14.800188] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:18:11.958 request: 00:18:11.958 { 00:18:11.958 "trtype": "tcp", 00:18:11.958 "method": "nvmf_get_transports", 00:18:11.958 "req_id": 1 00:18:11.958 } 00:18:11.958 Got JSON-RPC error response 00:18:11.958 response: 00:18:11.958 { 00:18:11.958 "code": -19, 00:18:11.958 "message": "No such device" 00:18:11.958 } 00:18:11.958 13:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:11.958 13:38:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:18:11.958 13:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.958 13:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:11.958 [2024-11-20 13:38:14.812350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:18:11.958 13:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.958 13:38:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:18:11.958 13:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.958 13:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:12.217 13:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.217 13:38:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:18:12.217 { 00:18:12.217 "subsystems": [ 00:18:12.217 { 00:18:12.217 "subsystem": "fsdev", 00:18:12.217 "config": [ 00:18:12.217 { 00:18:12.217 "method": "fsdev_set_opts", 00:18:12.217 "params": { 00:18:12.217 "fsdev_io_pool_size": 65535, 00:18:12.217 "fsdev_io_cache_size": 256 00:18:12.217 } 00:18:12.217 } 00:18:12.217 ] 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "subsystem": "keyring", 00:18:12.217 "config": [] 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "subsystem": "iobuf", 00:18:12.217 "config": [ 00:18:12.217 { 00:18:12.217 "method": "iobuf_set_options", 00:18:12.217 "params": { 00:18:12.217 "small_pool_count": 8192, 00:18:12.217 "large_pool_count": 1024, 00:18:12.217 "small_bufsize": 8192, 00:18:12.217 "large_bufsize": 135168, 00:18:12.217 "enable_numa": false 00:18:12.217 } 00:18:12.217 } 00:18:12.217 ] 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "subsystem": "sock", 00:18:12.217 "config": [ 00:18:12.217 { 00:18:12.217 "method": "sock_set_default_impl", 00:18:12.217 "params": { 00:18:12.217 "impl_name": "posix" 00:18:12.217 } 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "method": "sock_impl_set_options", 00:18:12.217 "params": { 00:18:12.217 "impl_name": "ssl", 00:18:12.217 "recv_buf_size": 4096, 00:18:12.217 "send_buf_size": 4096, 00:18:12.217 "enable_recv_pipe": true, 00:18:12.217 "enable_quickack": false, 00:18:12.217 
"enable_placement_id": 0, 00:18:12.217 "enable_zerocopy_send_server": true, 00:18:12.217 "enable_zerocopy_send_client": false, 00:18:12.217 "zerocopy_threshold": 0, 00:18:12.217 "tls_version": 0, 00:18:12.217 "enable_ktls": false 00:18:12.217 } 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "method": "sock_impl_set_options", 00:18:12.217 "params": { 00:18:12.217 "impl_name": "posix", 00:18:12.217 "recv_buf_size": 2097152, 00:18:12.217 "send_buf_size": 2097152, 00:18:12.217 "enable_recv_pipe": true, 00:18:12.217 "enable_quickack": false, 00:18:12.217 "enable_placement_id": 0, 00:18:12.217 "enable_zerocopy_send_server": true, 00:18:12.217 "enable_zerocopy_send_client": false, 00:18:12.217 "zerocopy_threshold": 0, 00:18:12.217 "tls_version": 0, 00:18:12.217 "enable_ktls": false 00:18:12.217 } 00:18:12.217 } 00:18:12.217 ] 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "subsystem": "vmd", 00:18:12.217 "config": [] 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "subsystem": "accel", 00:18:12.217 "config": [ 00:18:12.217 { 00:18:12.217 "method": "accel_set_options", 00:18:12.217 "params": { 00:18:12.217 "small_cache_size": 128, 00:18:12.217 "large_cache_size": 16, 00:18:12.217 "task_count": 2048, 00:18:12.217 "sequence_count": 2048, 00:18:12.217 "buf_count": 2048 00:18:12.217 } 00:18:12.217 } 00:18:12.217 ] 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "subsystem": "bdev", 00:18:12.217 "config": [ 00:18:12.217 { 00:18:12.217 "method": "bdev_set_options", 00:18:12.217 "params": { 00:18:12.217 "bdev_io_pool_size": 65535, 00:18:12.217 "bdev_io_cache_size": 256, 00:18:12.217 "bdev_auto_examine": true, 00:18:12.217 "iobuf_small_cache_size": 128, 00:18:12.217 "iobuf_large_cache_size": 16 00:18:12.217 } 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "method": "bdev_raid_set_options", 00:18:12.217 "params": { 00:18:12.217 "process_window_size_kb": 1024, 00:18:12.217 "process_max_bandwidth_mb_sec": 0 00:18:12.217 } 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "method": "bdev_iscsi_set_options", 
00:18:12.217 "params": { 00:18:12.217 "timeout_sec": 30 00:18:12.217 } 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "method": "bdev_nvme_set_options", 00:18:12.217 "params": { 00:18:12.217 "action_on_timeout": "none", 00:18:12.217 "timeout_us": 0, 00:18:12.217 "timeout_admin_us": 0, 00:18:12.217 "keep_alive_timeout_ms": 10000, 00:18:12.217 "arbitration_burst": 0, 00:18:12.217 "low_priority_weight": 0, 00:18:12.217 "medium_priority_weight": 0, 00:18:12.217 "high_priority_weight": 0, 00:18:12.217 "nvme_adminq_poll_period_us": 10000, 00:18:12.217 "nvme_ioq_poll_period_us": 0, 00:18:12.217 "io_queue_requests": 0, 00:18:12.217 "delay_cmd_submit": true, 00:18:12.217 "transport_retry_count": 4, 00:18:12.217 "bdev_retry_count": 3, 00:18:12.217 "transport_ack_timeout": 0, 00:18:12.217 "ctrlr_loss_timeout_sec": 0, 00:18:12.217 "reconnect_delay_sec": 0, 00:18:12.217 "fast_io_fail_timeout_sec": 0, 00:18:12.217 "disable_auto_failback": false, 00:18:12.217 "generate_uuids": false, 00:18:12.217 "transport_tos": 0, 00:18:12.217 "nvme_error_stat": false, 00:18:12.217 "rdma_srq_size": 0, 00:18:12.217 "io_path_stat": false, 00:18:12.217 "allow_accel_sequence": false, 00:18:12.217 "rdma_max_cq_size": 0, 00:18:12.217 "rdma_cm_event_timeout_ms": 0, 00:18:12.217 "dhchap_digests": [ 00:18:12.217 "sha256", 00:18:12.217 "sha384", 00:18:12.217 "sha512" 00:18:12.217 ], 00:18:12.217 "dhchap_dhgroups": [ 00:18:12.217 "null", 00:18:12.217 "ffdhe2048", 00:18:12.217 "ffdhe3072", 00:18:12.217 "ffdhe4096", 00:18:12.217 "ffdhe6144", 00:18:12.217 "ffdhe8192" 00:18:12.217 ] 00:18:12.217 } 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "method": "bdev_nvme_set_hotplug", 00:18:12.217 "params": { 00:18:12.217 "period_us": 100000, 00:18:12.217 "enable": false 00:18:12.217 } 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "method": "bdev_wait_for_examine" 00:18:12.217 } 00:18:12.217 ] 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "subsystem": "scsi", 00:18:12.217 "config": null 00:18:12.217 }, 00:18:12.217 { 
00:18:12.217 "subsystem": "scheduler", 00:18:12.217 "config": [ 00:18:12.217 { 00:18:12.217 "method": "framework_set_scheduler", 00:18:12.217 "params": { 00:18:12.217 "name": "static" 00:18:12.217 } 00:18:12.217 } 00:18:12.217 ] 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "subsystem": "vhost_scsi", 00:18:12.217 "config": [] 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "subsystem": "vhost_blk", 00:18:12.217 "config": [] 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "subsystem": "ublk", 00:18:12.217 "config": [] 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "subsystem": "nbd", 00:18:12.217 "config": [] 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "subsystem": "nvmf", 00:18:12.217 "config": [ 00:18:12.217 { 00:18:12.217 "method": "nvmf_set_config", 00:18:12.217 "params": { 00:18:12.217 "discovery_filter": "match_any", 00:18:12.217 "admin_cmd_passthru": { 00:18:12.217 "identify_ctrlr": false 00:18:12.217 }, 00:18:12.217 "dhchap_digests": [ 00:18:12.217 "sha256", 00:18:12.217 "sha384", 00:18:12.217 "sha512" 00:18:12.217 ], 00:18:12.217 "dhchap_dhgroups": [ 00:18:12.217 "null", 00:18:12.217 "ffdhe2048", 00:18:12.217 "ffdhe3072", 00:18:12.217 "ffdhe4096", 00:18:12.217 "ffdhe6144", 00:18:12.217 "ffdhe8192" 00:18:12.217 ] 00:18:12.217 } 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "method": "nvmf_set_max_subsystems", 00:18:12.217 "params": { 00:18:12.217 "max_subsystems": 1024 00:18:12.217 } 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "method": "nvmf_set_crdt", 00:18:12.217 "params": { 00:18:12.217 "crdt1": 0, 00:18:12.217 "crdt2": 0, 00:18:12.217 "crdt3": 0 00:18:12.217 } 00:18:12.217 }, 00:18:12.217 { 00:18:12.217 "method": "nvmf_create_transport", 00:18:12.217 "params": { 00:18:12.218 "trtype": "TCP", 00:18:12.218 "max_queue_depth": 128, 00:18:12.218 "max_io_qpairs_per_ctrlr": 127, 00:18:12.218 "in_capsule_data_size": 4096, 00:18:12.218 "max_io_size": 131072, 00:18:12.218 "io_unit_size": 131072, 00:18:12.218 "max_aq_depth": 128, 00:18:12.218 "num_shared_buffers": 511, 
00:18:12.218 "buf_cache_size": 4294967295, 00:18:12.218 "dif_insert_or_strip": false, 00:18:12.218 "zcopy": false, 00:18:12.218 "c2h_success": true, 00:18:12.218 "sock_priority": 0, 00:18:12.218 "abort_timeout_sec": 1, 00:18:12.218 "ack_timeout": 0, 00:18:12.218 "data_wr_pool_size": 0 00:18:12.218 } 00:18:12.218 } 00:18:12.218 ] 00:18:12.218 }, 00:18:12.218 { 00:18:12.218 "subsystem": "iscsi", 00:18:12.218 "config": [ 00:18:12.218 { 00:18:12.218 "method": "iscsi_set_options", 00:18:12.218 "params": { 00:18:12.218 "node_base": "iqn.2016-06.io.spdk", 00:18:12.218 "max_sessions": 128, 00:18:12.218 "max_connections_per_session": 2, 00:18:12.218 "max_queue_depth": 64, 00:18:12.218 "default_time2wait": 2, 00:18:12.218 "default_time2retain": 20, 00:18:12.218 "first_burst_length": 8192, 00:18:12.218 "immediate_data": true, 00:18:12.218 "allow_duplicated_isid": false, 00:18:12.218 "error_recovery_level": 0, 00:18:12.218 "nop_timeout": 60, 00:18:12.218 "nop_in_interval": 30, 00:18:12.218 "disable_chap": false, 00:18:12.218 "require_chap": false, 00:18:12.218 "mutual_chap": false, 00:18:12.218 "chap_group": 0, 00:18:12.218 "max_large_datain_per_connection": 64, 00:18:12.218 "max_r2t_per_connection": 4, 00:18:12.218 "pdu_pool_size": 36864, 00:18:12.218 "immediate_data_pool_size": 16384, 00:18:12.218 "data_out_pool_size": 2048 00:18:12.218 } 00:18:12.218 } 00:18:12.218 ] 00:18:12.218 } 00:18:12.218 ] 00:18:12.218 } 00:18:12.218 13:38:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:12.218 13:38:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57336 00:18:12.218 13:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57336 ']' 00:18:12.218 13:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57336 00:18:12.218 13:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:18:12.218 13:38:14 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.218 13:38:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57336 00:18:12.218 13:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:12.218 killing process with pid 57336 00:18:12.218 13:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:12.218 13:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57336' 00:18:12.218 13:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57336 00:18:12.218 13:38:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57336 00:18:14.853 13:38:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57387 00:18:14.853 13:38:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:18:14.853 13:38:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:18:20.121 13:38:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57387 00:18:20.121 13:38:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57387 ']' 00:18:20.121 13:38:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57387 00:18:20.121 13:38:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:18:20.121 13:38:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.121 13:38:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57387 00:18:20.121 13:38:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:20.121 13:38:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:18:20.121 killing process with pid 57387 00:18:20.121 13:38:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57387' 00:18:20.121 13:38:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57387 00:18:20.121 13:38:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57387 00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:18:22.025 00:18:22.025 real 0m11.051s 00:18:22.025 user 0m10.433s 00:18:22.025 sys 0m1.015s 00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:18:22.025 ************************************ 00:18:22.025 END TEST skip_rpc_with_json 00:18:22.025 ************************************ 00:18:22.025 13:38:24 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:18:22.025 13:38:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:22.025 13:38:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.025 13:38:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.025 ************************************ 00:18:22.025 START TEST skip_rpc_with_delay 00:18:22.025 ************************************ 00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:18:22.025 13:38:24 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:18:22.025 [2024-11-20 13:38:24.673451] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:22.025 13:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:22.025 00:18:22.025 real 0m0.173s 00:18:22.025 user 0m0.092s 00:18:22.026 sys 0m0.079s 00:18:22.026 13:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.026 13:38:24 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:18:22.026 ************************************ 00:18:22.026 END TEST skip_rpc_with_delay 00:18:22.026 ************************************ 00:18:22.026 13:38:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:18:22.026 13:38:24 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:18:22.026 13:38:24 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:18:22.026 13:38:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:22.026 13:38:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.026 13:38:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:22.026 ************************************ 00:18:22.026 START TEST exit_on_failed_rpc_init 00:18:22.026 ************************************ 00:18:22.026 13:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:18:22.026 13:38:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57520 00:18:22.026 13:38:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57520 00:18:22.026 13:38:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:22.026 13:38:24 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57520 ']' 00:18:22.026 13:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.026 13:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.026 13:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.026 13:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.026 13:38:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:18:22.026 [2024-11-20 13:38:24.908322] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:18:22.026 [2024-11-20 13:38:24.908491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57520 ] 00:18:22.285 [2024-11-20 13:38:25.081970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.544 [2024-11-20 13:38:25.216070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.513 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.513 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:18:23.513 13:38:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:18:23.513 13:38:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:18:23.513 13:38:26 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:18:23.513 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:18:23.513 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:23.513 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.513 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:23.513 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.513 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:23.513 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.513 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:23.513 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:18:23.513 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:18:23.513 [2024-11-20 13:38:26.241147] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:18:23.513 [2024-11-20 13:38:26.241887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57544 ] 00:18:23.772 [2024-11-20 13:38:26.429254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.772 [2024-11-20 13:38:26.563101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.772 [2024-11-20 13:38:26.563214] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:18:23.772 [2024-11-20 13:38:26.563237] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:23.772 [2024-11-20 13:38:26.563253] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:24.030 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:18:24.030 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:24.030 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:18:24.030 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:18:24.030 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:18:24.030 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:24.030 13:38:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:18:24.030 13:38:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57520 00:18:24.030 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57520 ']' 00:18:24.030 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57520 00:18:24.030 13:38:26 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:18:24.030 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.030 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57520 00:18:24.030 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.030 killing process with pid 57520 00:18:24.030 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.030 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57520' 00:18:24.030 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57520 00:18:24.030 13:38:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57520 00:18:26.561 00:18:26.561 real 0m4.339s 00:18:26.561 user 0m4.754s 00:18:26.561 sys 0m0.690s 00:18:26.561 13:38:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.561 13:38:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:18:26.561 ************************************ 00:18:26.561 END TEST exit_on_failed_rpc_init 00:18:26.561 ************************************ 00:18:26.561 13:38:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:18:26.561 00:18:26.561 real 0m23.226s 00:18:26.561 user 0m22.185s 00:18:26.561 sys 0m2.435s 00:18:26.561 13:38:29 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.561 13:38:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:26.561 ************************************ 00:18:26.561 END TEST skip_rpc 00:18:26.561 ************************************ 00:18:26.561 13:38:29 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:18:26.561 13:38:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:26.561 13:38:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.561 13:38:29 -- common/autotest_common.sh@10 -- # set +x 00:18:26.561 ************************************ 00:18:26.561 START TEST rpc_client 00:18:26.561 ************************************ 00:18:26.561 13:38:29 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:18:26.561 * Looking for test storage... 00:18:26.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:18:26.561 13:38:29 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:26.561 13:38:29 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:18:26.561 13:38:29 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:26.561 13:38:29 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@345 
-- # : 1 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.561 13:38:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:18:26.561 13:38:29 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.561 13:38:29 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:26.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.561 --rc genhtml_branch_coverage=1 00:18:26.561 --rc genhtml_function_coverage=1 00:18:26.561 --rc genhtml_legend=1 00:18:26.561 --rc geninfo_all_blocks=1 00:18:26.561 --rc geninfo_unexecuted_blocks=1 00:18:26.561 00:18:26.561 ' 00:18:26.561 13:38:29 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:26.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.561 --rc genhtml_branch_coverage=1 00:18:26.561 --rc genhtml_function_coverage=1 00:18:26.561 --rc 
genhtml_legend=1 00:18:26.561 --rc geninfo_all_blocks=1 00:18:26.561 --rc geninfo_unexecuted_blocks=1 00:18:26.561 00:18:26.561 ' 00:18:26.561 13:38:29 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:26.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.561 --rc genhtml_branch_coverage=1 00:18:26.561 --rc genhtml_function_coverage=1 00:18:26.561 --rc genhtml_legend=1 00:18:26.561 --rc geninfo_all_blocks=1 00:18:26.561 --rc geninfo_unexecuted_blocks=1 00:18:26.561 00:18:26.561 ' 00:18:26.561 13:38:29 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:26.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.561 --rc genhtml_branch_coverage=1 00:18:26.561 --rc genhtml_function_coverage=1 00:18:26.561 --rc genhtml_legend=1 00:18:26.561 --rc geninfo_all_blocks=1 00:18:26.561 --rc geninfo_unexecuted_blocks=1 00:18:26.561 00:18:26.561 ' 00:18:26.561 13:38:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:18:26.561 OK 00:18:26.820 13:38:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:18:26.820 00:18:26.820 real 0m0.256s 00:18:26.820 user 0m0.160s 00:18:26.820 sys 0m0.106s 00:18:26.820 13:38:29 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.820 13:38:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:18:26.820 ************************************ 00:18:26.820 END TEST rpc_client 00:18:26.820 ************************************ 00:18:26.820 13:38:29 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:18:26.820 13:38:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:26.820 13:38:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.820 13:38:29 -- common/autotest_common.sh@10 -- # set +x 00:18:26.820 ************************************ 00:18:26.820 START TEST json_config 
00:18:26.820 ************************************ 00:18:26.820 13:38:29 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:18:26.820 13:38:29 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:26.820 13:38:29 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:18:26.820 13:38:29 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:26.820 13:38:29 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:26.821 13:38:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:26.821 13:38:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.821 13:38:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.821 13:38:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.821 13:38:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:18:26.821 13:38:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.821 13:38:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:18:26.821 13:38:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:18:26.821 13:38:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:18:26.821 13:38:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:18:26.821 13:38:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.821 13:38:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:18:26.821 13:38:29 json_config -- scripts/common.sh@345 -- # : 1 00:18:26.821 13:38:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.821 13:38:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:26.821 13:38:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:18:26.821 13:38:29 json_config -- scripts/common.sh@353 -- # local d=1 00:18:26.821 13:38:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.821 13:38:29 json_config -- scripts/common.sh@355 -- # echo 1 00:18:26.821 13:38:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.821 13:38:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:18:26.821 13:38:29 json_config -- scripts/common.sh@353 -- # local d=2 00:18:26.821 13:38:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.821 13:38:29 json_config -- scripts/common.sh@355 -- # echo 2 00:18:26.821 13:38:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:18:26.821 13:38:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.821 13:38:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.821 13:38:29 json_config -- scripts/common.sh@368 -- # return 0 00:18:26.821 13:38:29 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.821 13:38:29 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:26.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.821 --rc genhtml_branch_coverage=1 00:18:26.821 --rc genhtml_function_coverage=1 00:18:26.821 --rc genhtml_legend=1 00:18:26.821 --rc geninfo_all_blocks=1 00:18:26.821 --rc geninfo_unexecuted_blocks=1 00:18:26.821 00:18:26.821 ' 00:18:26.821 13:38:29 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:26.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.821 --rc genhtml_branch_coverage=1 00:18:26.821 --rc genhtml_function_coverage=1 00:18:26.821 --rc genhtml_legend=1 00:18:26.821 --rc geninfo_all_blocks=1 00:18:26.821 --rc geninfo_unexecuted_blocks=1 00:18:26.821 00:18:26.821 ' 00:18:26.821 13:38:29 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:26.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.821 --rc genhtml_branch_coverage=1 00:18:26.821 --rc genhtml_function_coverage=1 00:18:26.821 --rc genhtml_legend=1 00:18:26.821 --rc geninfo_all_blocks=1 00:18:26.821 --rc geninfo_unexecuted_blocks=1 00:18:26.821 00:18:26.821 ' 00:18:26.821 13:38:29 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:26.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.821 --rc genhtml_branch_coverage=1 00:18:26.821 --rc genhtml_function_coverage=1 00:18:26.821 --rc genhtml_legend=1 00:18:26.821 --rc geninfo_all_blocks=1 00:18:26.821 --rc geninfo_unexecuted_blocks=1 00:18:26.821 00:18:26.821 ' 00:18:26.821 13:38:29 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e37304f3-121e-4ded-b956-b778f5717116 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=e37304f3-121e-4ded-b956-b778f5717116 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:26.821 13:38:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:18:26.821 13:38:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.821 13:38:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.821 13:38:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.821 13:38:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.821 13:38:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.821 13:38:29 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.821 13:38:29 json_config -- paths/export.sh@5 -- # export PATH 00:18:26.821 13:38:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@51 -- # : 0 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:26.821 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:26.821 13:38:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:26.821 13:38:29 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:18:26.821 13:38:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:18:26.821 13:38:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:18:26.821 13:38:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:18:26.821 13:38:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:18:26.821 WARNING: No tests are enabled so not running JSON configuration tests 00:18:26.821 13:38:29 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:18:26.821 13:38:29 json_config -- json_config/json_config.sh@28 -- # exit 0 00:18:26.821 00:18:26.821 real 0m0.193s 00:18:26.821 user 0m0.127s 00:18:26.821 sys 0m0.070s 00:18:26.821 13:38:29 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.821 13:38:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:18:26.821 ************************************ 00:18:26.821 END TEST json_config 00:18:26.821 ************************************ 00:18:27.081 13:38:29 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:18:27.082 13:38:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:27.082 13:38:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:27.082 13:38:29 -- common/autotest_common.sh@10 -- # set +x 00:18:27.082 ************************************ 00:18:27.082 START TEST json_config_extra_key 00:18:27.082 ************************************ 00:18:27.082 13:38:29 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:18:27.082 13:38:29 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:27.082 13:38:29 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:18:27.082 13:38:29 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:27.082 13:38:29 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:18:27.082 13:38:29 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:27.082 13:38:29 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:27.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.082 --rc genhtml_branch_coverage=1 00:18:27.082 --rc genhtml_function_coverage=1 00:18:27.082 --rc genhtml_legend=1 00:18:27.082 --rc geninfo_all_blocks=1 00:18:27.082 --rc geninfo_unexecuted_blocks=1 00:18:27.082 00:18:27.082 ' 00:18:27.082 13:38:29 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:27.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.082 --rc genhtml_branch_coverage=1 00:18:27.082 --rc genhtml_function_coverage=1 00:18:27.082 --rc 
genhtml_legend=1 00:18:27.082 --rc geninfo_all_blocks=1 00:18:27.082 --rc geninfo_unexecuted_blocks=1 00:18:27.082 00:18:27.082 ' 00:18:27.082 13:38:29 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:27.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.082 --rc genhtml_branch_coverage=1 00:18:27.082 --rc genhtml_function_coverage=1 00:18:27.082 --rc genhtml_legend=1 00:18:27.082 --rc geninfo_all_blocks=1 00:18:27.082 --rc geninfo_unexecuted_blocks=1 00:18:27.082 00:18:27.082 ' 00:18:27.082 13:38:29 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:27.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.082 --rc genhtml_branch_coverage=1 00:18:27.082 --rc genhtml_function_coverage=1 00:18:27.082 --rc genhtml_legend=1 00:18:27.082 --rc geninfo_all_blocks=1 00:18:27.082 --rc geninfo_unexecuted_blocks=1 00:18:27.082 00:18:27.082 ' 00:18:27.082 13:38:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e37304f3-121e-4ded-b956-b778f5717116 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e37304f3-121e-4ded-b956-b778f5717116 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:27.082 13:38:29 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:27.082 13:38:29 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.082 13:38:29 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.082 13:38:29 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.082 13:38:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:18:27.082 13:38:29 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:27.082 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:27.082 13:38:29 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:27.082 13:38:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:18:27.082 13:38:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:18:27.082 13:38:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:18:27.082 13:38:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:18:27.082 13:38:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:18:27.082 13:38:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:18:27.082 13:38:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:18:27.082 13:38:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:18:27.083 13:38:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:18:27.083 13:38:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:18:27.083 INFO: launching applications... 00:18:27.083 13:38:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:18:27.083 13:38:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:18:27.083 13:38:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:18:27.083 13:38:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:18:27.083 13:38:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:18:27.083 13:38:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:18:27.083 13:38:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:18:27.083 13:38:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:18:27.083 13:38:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:18:27.083 13:38:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57748 00:18:27.083 Waiting for target to run... 00:18:27.083 13:38:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:18:27.083 13:38:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57748 /var/tmp/spdk_tgt.sock 00:18:27.083 13:38:29 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57748 ']' 00:18:27.083 13:38:29 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:18:27.083 13:38:29 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:18:27.083 13:38:29 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:18:27.083 13:38:29 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:18:27.083 13:38:29 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.083 13:38:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:18:27.341 [2024-11-20 13:38:30.074992] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:18:27.341 [2024-11-20 13:38:30.075165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57748 ] 00:18:27.919 [2024-11-20 13:38:30.547437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.920 [2024-11-20 13:38:30.691128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.489 13:38:31 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.489 00:18:28.489 13:38:31 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:18:28.489 13:38:31 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:18:28.489 INFO: shutting down applications... 00:18:28.489 13:38:31 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:18:28.489 13:38:31 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:18:28.490 13:38:31 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:18:28.490 13:38:31 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:18:28.490 13:38:31 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57748 ]] 00:18:28.490 13:38:31 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57748 00:18:28.490 13:38:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:18:28.490 13:38:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:28.490 13:38:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:18:28.490 13:38:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:18:29.056 13:38:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:18:29.056 13:38:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:29.056 13:38:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:18:29.056 13:38:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:18:29.622 13:38:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:18:29.622 13:38:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:29.622 13:38:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:18:29.622 13:38:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:18:30.204 13:38:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:18:30.204 13:38:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:30.204 13:38:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:18:30.204 13:38:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:18:30.772 13:38:33 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:18:30.772 13:38:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:30.772 13:38:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:18:30.772 13:38:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:18:31.030 13:38:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:18:31.030 13:38:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:31.030 13:38:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:18:31.030 13:38:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:18:31.597 13:38:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:18:31.597 13:38:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:18:31.597 13:38:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57748 00:18:31.597 13:38:34 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:18:31.597 13:38:34 json_config_extra_key -- json_config/common.sh@43 -- # break 00:18:31.597 13:38:34 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:18:31.597 SPDK target shutdown done 00:18:31.597 Success 00:18:31.597 13:38:34 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:18:31.597 13:38:34 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:18:31.597 00:18:31.597 real 0m4.624s 00:18:31.597 user 0m3.981s 00:18:31.597 sys 0m0.645s 00:18:31.597 ************************************ 00:18:31.597 END TEST json_config_extra_key 00:18:31.597 ************************************ 00:18:31.597 13:38:34 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.597 13:38:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:18:31.597 13:38:34 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:18:31.597 13:38:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:31.597 13:38:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.597 13:38:34 -- common/autotest_common.sh@10 -- # set +x 00:18:31.597 ************************************ 00:18:31.597 START TEST alias_rpc 00:18:31.597 ************************************ 00:18:31.597 13:38:34 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:18:31.855 * Looking for test storage... 00:18:31.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:18:31.855 13:38:34 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:31.855 13:38:34 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:18:31.855 13:38:34 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:31.855 13:38:34 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:18:31.855 13:38:34 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.855 13:38:34 alias_rpc -- scripts/common.sh@368 -- # return 0 00:18:31.855 13:38:34 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.855 13:38:34 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:31.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.856 --rc genhtml_branch_coverage=1 00:18:31.856 --rc genhtml_function_coverage=1 00:18:31.856 --rc genhtml_legend=1 00:18:31.856 --rc geninfo_all_blocks=1 00:18:31.856 --rc geninfo_unexecuted_blocks=1 00:18:31.856 00:18:31.856 ' 00:18:31.856 13:38:34 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:31.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.856 --rc genhtml_branch_coverage=1 00:18:31.856 --rc genhtml_function_coverage=1 00:18:31.856 --rc 
genhtml_legend=1 00:18:31.856 --rc geninfo_all_blocks=1 00:18:31.856 --rc geninfo_unexecuted_blocks=1 00:18:31.856 00:18:31.856 ' 00:18:31.856 13:38:34 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:31.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.856 --rc genhtml_branch_coverage=1 00:18:31.856 --rc genhtml_function_coverage=1 00:18:31.856 --rc genhtml_legend=1 00:18:31.856 --rc geninfo_all_blocks=1 00:18:31.856 --rc geninfo_unexecuted_blocks=1 00:18:31.856 00:18:31.856 ' 00:18:31.856 13:38:34 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:31.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.856 --rc genhtml_branch_coverage=1 00:18:31.856 --rc genhtml_function_coverage=1 00:18:31.856 --rc genhtml_legend=1 00:18:31.856 --rc geninfo_all_blocks=1 00:18:31.856 --rc geninfo_unexecuted_blocks=1 00:18:31.856 00:18:31.856 ' 00:18:31.856 13:38:34 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:18:31.856 13:38:34 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57860 00:18:31.856 13:38:34 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57860 00:18:31.856 13:38:34 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.856 13:38:34 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57860 ']' 00:18:31.856 13:38:34 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.856 13:38:34 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.856 13:38:34 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:31.856 13:38:34 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.856 13:38:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:31.856 [2024-11-20 13:38:34.762112] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:18:31.856 [2024-11-20 13:38:34.762526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57860 ] 00:18:32.115 [2024-11-20 13:38:34.946088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.373 [2024-11-20 13:38:35.073634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.307 13:38:35 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.307 13:38:35 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:33.307 13:38:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:18:33.565 13:38:36 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57860 00:18:33.565 13:38:36 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57860 ']' 00:18:33.565 13:38:36 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57860 00:18:33.565 13:38:36 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:18:33.565 13:38:36 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.565 13:38:36 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57860 00:18:33.565 killing process with pid 57860 00:18:33.565 13:38:36 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.565 13:38:36 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.565 13:38:36 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57860' 00:18:33.565 13:38:36 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57860 00:18:33.565 13:38:36 alias_rpc -- common/autotest_common.sh@978 -- # wait 57860 00:18:36.105 ************************************ 00:18:36.105 END TEST alias_rpc 00:18:36.105 ************************************ 00:18:36.105 00:18:36.105 real 0m4.014s 00:18:36.105 user 0m4.195s 00:18:36.105 sys 0m0.633s 00:18:36.105 13:38:38 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.105 13:38:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:36.105 13:38:38 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:18:36.105 13:38:38 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:18:36.105 13:38:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:36.105 13:38:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.105 13:38:38 -- common/autotest_common.sh@10 -- # set +x 00:18:36.105 ************************************ 00:18:36.105 START TEST spdkcli_tcp 00:18:36.105 ************************************ 00:18:36.105 13:38:38 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:18:36.105 * Looking for test storage... 
00:18:36.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:36.105 13:38:38 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:36.105 13:38:38 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:36.105 13:38:38 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:18:36.105 13:38:38 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:18:36.105 13:38:38 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:18:36.106 13:38:38 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:18:36.106 13:38:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:18:36.106 13:38:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.106 13:38:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:18:36.106 13:38:38 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:18:36.106 13:38:38 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:36.106 13:38:38 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:36.106 13:38:38 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:18:36.106 13:38:38 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.106 13:38:38 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:36.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.106 --rc genhtml_branch_coverage=1 00:18:36.106 --rc genhtml_function_coverage=1 00:18:36.106 --rc genhtml_legend=1 00:18:36.106 --rc geninfo_all_blocks=1 00:18:36.106 --rc geninfo_unexecuted_blocks=1 00:18:36.106 00:18:36.106 ' 00:18:36.106 13:38:38 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:36.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.106 --rc genhtml_branch_coverage=1 00:18:36.106 --rc genhtml_function_coverage=1 00:18:36.106 --rc genhtml_legend=1 00:18:36.106 --rc geninfo_all_blocks=1 00:18:36.106 --rc geninfo_unexecuted_blocks=1 00:18:36.106 00:18:36.106 ' 00:18:36.106 13:38:38 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:36.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.106 --rc genhtml_branch_coverage=1 00:18:36.106 --rc genhtml_function_coverage=1 00:18:36.106 --rc genhtml_legend=1 00:18:36.106 --rc geninfo_all_blocks=1 00:18:36.106 --rc geninfo_unexecuted_blocks=1 00:18:36.106 00:18:36.106 ' 00:18:36.106 13:38:38 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:36.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.106 --rc genhtml_branch_coverage=1 00:18:36.106 --rc genhtml_function_coverage=1 00:18:36.106 --rc genhtml_legend=1 00:18:36.106 --rc geninfo_all_blocks=1 00:18:36.106 --rc geninfo_unexecuted_blocks=1 00:18:36.106 00:18:36.106 ' 00:18:36.106 13:38:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:36.106 13:38:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:36.106 13:38:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:36.106 13:38:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:18:36.106 13:38:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:18:36.106 13:38:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:36.106 13:38:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:18:36.106 13:38:38 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:36.106 13:38:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:36.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
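The `lt 1.15 2` / `cmp_versions` trace above is plain bash deciding whether the installed `lcov` (1.15 here) is older than 2 before exporting the newer `--rc` coverage options into `LCOV_OPTS`. A minimal standalone sketch of that field-by-field comparison (the function body is a reconstruction, not the exact `scripts/common.sh` source):

```shell
# Reconstruction of the lt/cmp_versions idea traced in the log: split dotted
# versions on the separators used by scripts/common.sh and compare numerically,
# field by field, left to right.
lt() {
  local IFS='.-:'                # same separator set as the IFS=.-: in the trace
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v a b
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    a=${ver1[v]:-0}; b=${ver2[v]:-0}   # a missing field counts as 0
    (( a < b )) && return 0            # first differing field decides
    (( a > b )) && return 1
  done
  return 1                             # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

This mirrors the `read -ra ver1` / `(( ver1[v] < ver2[v] ))` loop visible in the trace; the log's version uses helper functions (`decimal`, `cmp_versions`) that are inlined here for brevity.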
00:18:36.106 13:38:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57967 00:18:36.106 13:38:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57967 00:18:36.106 13:38:38 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57967 ']' 00:18:36.106 13:38:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:36.106 13:38:38 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.106 13:38:38 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.106 13:38:38 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.106 13:38:38 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.106 13:38:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:36.106 [2024-11-20 13:38:38.839683] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:18:36.106 [2024-11-20 13:38:38.839883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57967 ] 00:18:36.378 [2024-11-20 13:38:39.027483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:36.378 [2024-11-20 13:38:39.179365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.378 [2024-11-20 13:38:39.179367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.316 13:38:40 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.316 13:38:40 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:18:37.316 13:38:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:18:37.316 13:38:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57984 00:18:37.316 13:38:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:18:37.575 [ 00:18:37.575 "bdev_malloc_delete", 00:18:37.575 "bdev_malloc_create", 00:18:37.575 "bdev_null_resize", 00:18:37.575 "bdev_null_delete", 00:18:37.575 "bdev_null_create", 00:18:37.575 "bdev_nvme_cuse_unregister", 00:18:37.575 "bdev_nvme_cuse_register", 00:18:37.575 "bdev_opal_new_user", 00:18:37.575 "bdev_opal_set_lock_state", 00:18:37.575 "bdev_opal_delete", 00:18:37.575 "bdev_opal_get_info", 00:18:37.575 "bdev_opal_create", 00:18:37.575 "bdev_nvme_opal_revert", 00:18:37.575 "bdev_nvme_opal_init", 00:18:37.575 "bdev_nvme_send_cmd", 00:18:37.575 "bdev_nvme_set_keys", 00:18:37.575 "bdev_nvme_get_path_iostat", 00:18:37.575 "bdev_nvme_get_mdns_discovery_info", 00:18:37.575 "bdev_nvme_stop_mdns_discovery", 00:18:37.575 "bdev_nvme_start_mdns_discovery", 00:18:37.575 "bdev_nvme_set_multipath_policy", 00:18:37.575 
"bdev_nvme_set_preferred_path", 00:18:37.575 "bdev_nvme_get_io_paths", 00:18:37.575 "bdev_nvme_remove_error_injection", 00:18:37.575 "bdev_nvme_add_error_injection", 00:18:37.575 "bdev_nvme_get_discovery_info", 00:18:37.575 "bdev_nvme_stop_discovery", 00:18:37.575 "bdev_nvme_start_discovery", 00:18:37.575 "bdev_nvme_get_controller_health_info", 00:18:37.575 "bdev_nvme_disable_controller", 00:18:37.575 "bdev_nvme_enable_controller", 00:18:37.575 "bdev_nvme_reset_controller", 00:18:37.575 "bdev_nvme_get_transport_statistics", 00:18:37.575 "bdev_nvme_apply_firmware", 00:18:37.575 "bdev_nvme_detach_controller", 00:18:37.575 "bdev_nvme_get_controllers", 00:18:37.575 "bdev_nvme_attach_controller", 00:18:37.575 "bdev_nvme_set_hotplug", 00:18:37.575 "bdev_nvme_set_options", 00:18:37.575 "bdev_passthru_delete", 00:18:37.575 "bdev_passthru_create", 00:18:37.575 "bdev_lvol_set_parent_bdev", 00:18:37.575 "bdev_lvol_set_parent", 00:18:37.575 "bdev_lvol_check_shallow_copy", 00:18:37.575 "bdev_lvol_start_shallow_copy", 00:18:37.575 "bdev_lvol_grow_lvstore", 00:18:37.575 "bdev_lvol_get_lvols", 00:18:37.575 "bdev_lvol_get_lvstores", 00:18:37.575 "bdev_lvol_delete", 00:18:37.575 "bdev_lvol_set_read_only", 00:18:37.575 "bdev_lvol_resize", 00:18:37.575 "bdev_lvol_decouple_parent", 00:18:37.575 "bdev_lvol_inflate", 00:18:37.575 "bdev_lvol_rename", 00:18:37.575 "bdev_lvol_clone_bdev", 00:18:37.575 "bdev_lvol_clone", 00:18:37.575 "bdev_lvol_snapshot", 00:18:37.575 "bdev_lvol_create", 00:18:37.575 "bdev_lvol_delete_lvstore", 00:18:37.575 "bdev_lvol_rename_lvstore", 00:18:37.575 "bdev_lvol_create_lvstore", 00:18:37.575 "bdev_raid_set_options", 00:18:37.575 "bdev_raid_remove_base_bdev", 00:18:37.575 "bdev_raid_add_base_bdev", 00:18:37.575 "bdev_raid_delete", 00:18:37.575 "bdev_raid_create", 00:18:37.575 "bdev_raid_get_bdevs", 00:18:37.575 "bdev_error_inject_error", 00:18:37.575 "bdev_error_delete", 00:18:37.575 "bdev_error_create", 00:18:37.575 "bdev_split_delete", 00:18:37.575 
"bdev_split_create", 00:18:37.575 "bdev_delay_delete", 00:18:37.575 "bdev_delay_create", 00:18:37.575 "bdev_delay_update_latency", 00:18:37.575 "bdev_zone_block_delete", 00:18:37.575 "bdev_zone_block_create", 00:18:37.575 "blobfs_create", 00:18:37.575 "blobfs_detect", 00:18:37.575 "blobfs_set_cache_size", 00:18:37.575 "bdev_aio_delete", 00:18:37.575 "bdev_aio_rescan", 00:18:37.575 "bdev_aio_create", 00:18:37.575 "bdev_ftl_set_property", 00:18:37.575 "bdev_ftl_get_properties", 00:18:37.575 "bdev_ftl_get_stats", 00:18:37.575 "bdev_ftl_unmap", 00:18:37.575 "bdev_ftl_unload", 00:18:37.575 "bdev_ftl_delete", 00:18:37.575 "bdev_ftl_load", 00:18:37.575 "bdev_ftl_create", 00:18:37.575 "bdev_virtio_attach_controller", 00:18:37.575 "bdev_virtio_scsi_get_devices", 00:18:37.575 "bdev_virtio_detach_controller", 00:18:37.575 "bdev_virtio_blk_set_hotplug", 00:18:37.575 "bdev_iscsi_delete", 00:18:37.575 "bdev_iscsi_create", 00:18:37.575 "bdev_iscsi_set_options", 00:18:37.575 "accel_error_inject_error", 00:18:37.575 "ioat_scan_accel_module", 00:18:37.575 "dsa_scan_accel_module", 00:18:37.575 "iaa_scan_accel_module", 00:18:37.575 "keyring_file_remove_key", 00:18:37.575 "keyring_file_add_key", 00:18:37.575 "keyring_linux_set_options", 00:18:37.575 "fsdev_aio_delete", 00:18:37.575 "fsdev_aio_create", 00:18:37.575 "iscsi_get_histogram", 00:18:37.575 "iscsi_enable_histogram", 00:18:37.575 "iscsi_set_options", 00:18:37.575 "iscsi_get_auth_groups", 00:18:37.575 "iscsi_auth_group_remove_secret", 00:18:37.575 "iscsi_auth_group_add_secret", 00:18:37.575 "iscsi_delete_auth_group", 00:18:37.575 "iscsi_create_auth_group", 00:18:37.575 "iscsi_set_discovery_auth", 00:18:37.575 "iscsi_get_options", 00:18:37.575 "iscsi_target_node_request_logout", 00:18:37.575 "iscsi_target_node_set_redirect", 00:18:37.575 "iscsi_target_node_set_auth", 00:18:37.575 "iscsi_target_node_add_lun", 00:18:37.575 "iscsi_get_stats", 00:18:37.575 "iscsi_get_connections", 00:18:37.575 "iscsi_portal_group_set_auth", 
00:18:37.575 "iscsi_start_portal_group", 00:18:37.575 "iscsi_delete_portal_group", 00:18:37.575 "iscsi_create_portal_group", 00:18:37.575 "iscsi_get_portal_groups", 00:18:37.575 "iscsi_delete_target_node", 00:18:37.575 "iscsi_target_node_remove_pg_ig_maps", 00:18:37.575 "iscsi_target_node_add_pg_ig_maps", 00:18:37.575 "iscsi_create_target_node", 00:18:37.575 "iscsi_get_target_nodes", 00:18:37.575 "iscsi_delete_initiator_group", 00:18:37.575 "iscsi_initiator_group_remove_initiators", 00:18:37.575 "iscsi_initiator_group_add_initiators", 00:18:37.575 "iscsi_create_initiator_group", 00:18:37.575 "iscsi_get_initiator_groups", 00:18:37.575 "nvmf_set_crdt", 00:18:37.575 "nvmf_set_config", 00:18:37.575 "nvmf_set_max_subsystems", 00:18:37.575 "nvmf_stop_mdns_prr", 00:18:37.575 "nvmf_publish_mdns_prr", 00:18:37.575 "nvmf_subsystem_get_listeners", 00:18:37.575 "nvmf_subsystem_get_qpairs", 00:18:37.575 "nvmf_subsystem_get_controllers", 00:18:37.575 "nvmf_get_stats", 00:18:37.575 "nvmf_get_transports", 00:18:37.575 "nvmf_create_transport", 00:18:37.575 "nvmf_get_targets", 00:18:37.575 "nvmf_delete_target", 00:18:37.575 "nvmf_create_target", 00:18:37.575 "nvmf_subsystem_allow_any_host", 00:18:37.576 "nvmf_subsystem_set_keys", 00:18:37.576 "nvmf_subsystem_remove_host", 00:18:37.576 "nvmf_subsystem_add_host", 00:18:37.576 "nvmf_ns_remove_host", 00:18:37.576 "nvmf_ns_add_host", 00:18:37.576 "nvmf_subsystem_remove_ns", 00:18:37.576 "nvmf_subsystem_set_ns_ana_group", 00:18:37.576 "nvmf_subsystem_add_ns", 00:18:37.576 "nvmf_subsystem_listener_set_ana_state", 00:18:37.576 "nvmf_discovery_get_referrals", 00:18:37.576 "nvmf_discovery_remove_referral", 00:18:37.576 "nvmf_discovery_add_referral", 00:18:37.576 "nvmf_subsystem_remove_listener", 00:18:37.576 "nvmf_subsystem_add_listener", 00:18:37.576 "nvmf_delete_subsystem", 00:18:37.576 "nvmf_create_subsystem", 00:18:37.576 "nvmf_get_subsystems", 00:18:37.576 "env_dpdk_get_mem_stats", 00:18:37.576 "nbd_get_disks", 00:18:37.576 
"nbd_stop_disk", 00:18:37.576 "nbd_start_disk", 00:18:37.576 "ublk_recover_disk", 00:18:37.576 "ublk_get_disks", 00:18:37.576 "ublk_stop_disk", 00:18:37.576 "ublk_start_disk", 00:18:37.576 "ublk_destroy_target", 00:18:37.576 "ublk_create_target", 00:18:37.576 "virtio_blk_create_transport", 00:18:37.576 "virtio_blk_get_transports", 00:18:37.576 "vhost_controller_set_coalescing", 00:18:37.576 "vhost_get_controllers", 00:18:37.576 "vhost_delete_controller", 00:18:37.576 "vhost_create_blk_controller", 00:18:37.576 "vhost_scsi_controller_remove_target", 00:18:37.576 "vhost_scsi_controller_add_target", 00:18:37.576 "vhost_start_scsi_controller", 00:18:37.576 "vhost_create_scsi_controller", 00:18:37.576 "thread_set_cpumask", 00:18:37.576 "scheduler_set_options", 00:18:37.576 "framework_get_governor", 00:18:37.576 "framework_get_scheduler", 00:18:37.576 "framework_set_scheduler", 00:18:37.576 "framework_get_reactors", 00:18:37.576 "thread_get_io_channels", 00:18:37.576 "thread_get_pollers", 00:18:37.576 "thread_get_stats", 00:18:37.576 "framework_monitor_context_switch", 00:18:37.576 "spdk_kill_instance", 00:18:37.576 "log_enable_timestamps", 00:18:37.576 "log_get_flags", 00:18:37.576 "log_clear_flag", 00:18:37.576 "log_set_flag", 00:18:37.576 "log_get_level", 00:18:37.576 "log_set_level", 00:18:37.576 "log_get_print_level", 00:18:37.576 "log_set_print_level", 00:18:37.576 "framework_enable_cpumask_locks", 00:18:37.576 "framework_disable_cpumask_locks", 00:18:37.576 "framework_wait_init", 00:18:37.576 "framework_start_init", 00:18:37.576 "scsi_get_devices", 00:18:37.576 "bdev_get_histogram", 00:18:37.576 "bdev_enable_histogram", 00:18:37.576 "bdev_set_qos_limit", 00:18:37.576 "bdev_set_qd_sampling_period", 00:18:37.576 "bdev_get_bdevs", 00:18:37.576 "bdev_reset_iostat", 00:18:37.576 "bdev_get_iostat", 00:18:37.576 "bdev_examine", 00:18:37.576 "bdev_wait_for_examine", 00:18:37.576 "bdev_set_options", 00:18:37.576 "accel_get_stats", 00:18:37.576 "accel_set_options", 
00:18:37.576 "accel_set_driver", 00:18:37.576 "accel_crypto_key_destroy", 00:18:37.576 "accel_crypto_keys_get", 00:18:37.576 "accel_crypto_key_create", 00:18:37.576 "accel_assign_opc", 00:18:37.576 "accel_get_module_info", 00:18:37.576 "accel_get_opc_assignments", 00:18:37.576 "vmd_rescan", 00:18:37.576 "vmd_remove_device", 00:18:37.576 "vmd_enable", 00:18:37.576 "sock_get_default_impl", 00:18:37.576 "sock_set_default_impl", 00:18:37.576 "sock_impl_set_options", 00:18:37.576 "sock_impl_get_options", 00:18:37.576 "iobuf_get_stats", 00:18:37.576 "iobuf_set_options", 00:18:37.576 "keyring_get_keys", 00:18:37.576 "framework_get_pci_devices", 00:18:37.576 "framework_get_config", 00:18:37.576 "framework_get_subsystems", 00:18:37.576 "fsdev_set_opts", 00:18:37.576 "fsdev_get_opts", 00:18:37.576 "trace_get_info", 00:18:37.576 "trace_get_tpoint_group_mask", 00:18:37.576 "trace_disable_tpoint_group", 00:18:37.576 "trace_enable_tpoint_group", 00:18:37.576 "trace_clear_tpoint_mask", 00:18:37.576 "trace_set_tpoint_mask", 00:18:37.576 "notify_get_notifications", 00:18:37.576 "notify_get_types", 00:18:37.576 "spdk_get_version", 00:18:37.576 "rpc_get_methods" 00:18:37.576 ] 00:18:37.576 13:38:40 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:18:37.576 13:38:40 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:37.576 13:38:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:37.576 13:38:40 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:37.576 13:38:40 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57967 00:18:37.576 13:38:40 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57967 ']' 00:18:37.576 13:38:40 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57967 00:18:37.576 13:38:40 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:18:37.576 13:38:40 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.576 13:38:40 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57967 00:18:37.576 killing process with pid 57967 00:18:37.576 13:38:40 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:37.576 13:38:40 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:37.576 13:38:40 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57967' 00:18:37.576 13:38:40 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57967 00:18:37.576 13:38:40 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57967 00:18:40.106 ************************************ 00:18:40.106 END TEST spdkcli_tcp 00:18:40.106 ************************************ 00:18:40.106 00:18:40.106 real 0m4.105s 00:18:40.106 user 0m7.389s 00:18:40.106 sys 0m0.667s 00:18:40.106 13:38:42 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.106 13:38:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:40.106 13:38:42 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:18:40.106 13:38:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:40.106 13:38:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.106 13:38:42 -- common/autotest_common.sh@10 -- # set +x 00:18:40.106 ************************************ 00:18:40.106 START TEST dpdk_mem_utility 00:18:40.106 ************************************ 00:18:40.106 13:38:42 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:18:40.106 * Looking for test storage... 
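The spdkcli_tcp run above boils down to three commands: start `spdk_tgt` on its default UNIX-domain RPC socket, bridge a TCP port to that socket with `socat` (what `tcp.sh@30` does), then drive JSON-RPC over TCP with `rpc.py`. A sketch of the sequence, with paths and flags as they appear in the log; this assumes a built SPDK tree and an installed `socat`, and is not runnable without a live target:

```shell
# Start the SPDK target; it listens on /var/tmp/spdk.sock by default.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 &

# Bridge TCP 127.0.0.1:9998 to the UNIX-domain RPC socket. As written this
# serves a single connection; socat's fork option would serve several.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &

# Issue an RPC over TCP instead of the UNIX socket
# (-r/-t are the retry/timeout flags as invoked in the log).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
```

The long JSON array of method names earlier in the log is the output of that final `rpc_get_methods` call arriving over the bridge.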
00:18:40.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:18:40.107 13:38:42 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:40.107 13:38:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:18:40.107 13:38:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:40.107 13:38:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.107 13:38:42 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:18:40.107 13:38:42 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.107 13:38:42 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:40.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.107 --rc genhtml_branch_coverage=1 00:18:40.107 --rc genhtml_function_coverage=1 00:18:40.107 --rc genhtml_legend=1 00:18:40.107 --rc geninfo_all_blocks=1 00:18:40.107 --rc geninfo_unexecuted_blocks=1 00:18:40.107 00:18:40.107 ' 00:18:40.107 13:38:42 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:40.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.107 --rc genhtml_branch_coverage=1 00:18:40.107 --rc genhtml_function_coverage=1 00:18:40.107 --rc genhtml_legend=1 00:18:40.107 --rc geninfo_all_blocks=1 00:18:40.107 --rc 
geninfo_unexecuted_blocks=1 00:18:40.107 00:18:40.107 ' 00:18:40.107 13:38:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:40.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.107 --rc genhtml_branch_coverage=1 00:18:40.107 --rc genhtml_function_coverage=1 00:18:40.107 --rc genhtml_legend=1 00:18:40.107 --rc geninfo_all_blocks=1 00:18:40.107 --rc geninfo_unexecuted_blocks=1 00:18:40.107 00:18:40.107 ' 00:18:40.107 13:38:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:40.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.107 --rc genhtml_branch_coverage=1 00:18:40.107 --rc genhtml_function_coverage=1 00:18:40.107 --rc genhtml_legend=1 00:18:40.107 --rc geninfo_all_blocks=1 00:18:40.107 --rc geninfo_unexecuted_blocks=1 00:18:40.107 00:18:40.107 ' 00:18:40.107 13:38:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:18:40.107 13:38:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58089 00:18:40.107 13:38:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58089 00:18:40.107 13:38:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58089 ']' 00:18:40.107 13:38:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:40.107 13:38:42 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.107 13:38:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.107 13:38:42 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
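The dpdk_mem_utility test below asks the target for `env_dpdk_get_mem_stats` (which writes `/tmp/spdk_mem_dump.txt`) and renders it with `scripts/dpdk_mem_info.py`. The per-pool lines in that report have the shape `size: <MiB> MiB name: <name>`, which is easy to post-process; a small awk sketch (the helper name `sum_pools` and the idea of re-summing are my own, while the sample lines are copied from the report in this log):

```shell
# Sum the "size: <MiB> MiB name: <name>" pool lines from a dpdk_mem_info.py
# report fed on stdin; prints a count and a MiB total.
sum_pools() {
  awk '/^size:/ { total += $2; n++ } END { printf "%d pools, %.6f MiB\n", n, total }'
}

sum_pools <<'EOF'
size: 212.674988 MiB name: PDU_immediate_data_Pool
size: 158.602051 MiB name: PDU_data_out_Pool
size: 100.555481 MiB name: bdev_io_58089
EOF
# → 3 pools, 471.832520 MiB
```

The `^size:` anchor deliberately skips the `element at address: ... with size: ...` free-element lines, so only the heap/mempool/memzone summary rows are counted.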
00:18:40.107 13:38:42 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.107 13:38:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:18:40.107 [2024-11-20 13:38:42.989765] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:18:40.107 [2024-11-20 13:38:42.990465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58089 ] 00:18:40.365 [2024-11-20 13:38:43.174528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.624 [2024-11-20 13:38:43.304795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.559 13:38:44 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.559 13:38:44 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:18:41.559 13:38:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:18:41.559 13:38:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:18:41.559 13:38:44 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.559 13:38:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:18:41.559 { 00:18:41.559 "filename": "/tmp/spdk_mem_dump.txt" 00:18:41.559 } 00:18:41.559 13:38:44 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.560 13:38:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:18:41.560 DPDK memory size 824.000000 MiB in 1 heap(s) 00:18:41.560 1 heaps totaling size 824.000000 MiB 00:18:41.560 size: 824.000000 MiB heap id: 0 00:18:41.560 end heaps---------- 00:18:41.560 9 mempools totaling size 603.782043 MiB 00:18:41.560 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:18:41.560 size: 158.602051 MiB name: PDU_data_out_Pool 00:18:41.560 size: 100.555481 MiB name: bdev_io_58089 00:18:41.560 size: 50.003479 MiB name: msgpool_58089 00:18:41.560 size: 36.509338 MiB name: fsdev_io_58089 00:18:41.560 size: 21.763794 MiB name: PDU_Pool 00:18:41.560 size: 19.513306 MiB name: SCSI_TASK_Pool 00:18:41.560 size: 4.133484 MiB name: evtpool_58089 00:18:41.560 size: 0.026123 MiB name: Session_Pool 00:18:41.560 end mempools------- 00:18:41.560 6 memzones totaling size 4.142822 MiB 00:18:41.560 size: 1.000366 MiB name: RG_ring_0_58089 00:18:41.560 size: 1.000366 MiB name: RG_ring_1_58089 00:18:41.560 size: 1.000366 MiB name: RG_ring_4_58089 00:18:41.560 size: 1.000366 MiB name: RG_ring_5_58089 00:18:41.560 size: 0.125366 MiB name: RG_ring_2_58089 00:18:41.560 size: 0.015991 MiB name: RG_ring_3_58089 00:18:41.560 end memzones------- 00:18:41.560 13:38:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:18:41.560 heap id: 0 total size: 824.000000 MiB number of busy elements: 316 number of free elements: 18 00:18:41.560 list of free elements. 
size: 16.781128 MiB 00:18:41.560 element at address: 0x200006400000 with size: 1.995972 MiB 00:18:41.560 element at address: 0x20000a600000 with size: 1.995972 MiB 00:18:41.560 element at address: 0x200003e00000 with size: 1.991028 MiB 00:18:41.560 element at address: 0x200019500040 with size: 0.999939 MiB 00:18:41.560 element at address: 0x200019900040 with size: 0.999939 MiB 00:18:41.560 element at address: 0x200019a00000 with size: 0.999084 MiB 00:18:41.560 element at address: 0x200032600000 with size: 0.994324 MiB 00:18:41.560 element at address: 0x200000400000 with size: 0.992004 MiB 00:18:41.560 element at address: 0x200019200000 with size: 0.959656 MiB 00:18:41.560 element at address: 0x200019d00040 with size: 0.936401 MiB 00:18:41.560 element at address: 0x200000200000 with size: 0.716980 MiB 00:18:41.560 element at address: 0x20001b400000 with size: 0.562439 MiB 00:18:41.560 element at address: 0x200000c00000 with size: 0.489197 MiB 00:18:41.560 element at address: 0x200019600000 with size: 0.487976 MiB 00:18:41.560 element at address: 0x200019e00000 with size: 0.485413 MiB 00:18:41.560 element at address: 0x200012c00000 with size: 0.433472 MiB 00:18:41.560 element at address: 0x200028800000 with size: 0.390442 MiB 00:18:41.560 element at address: 0x200000800000 with size: 0.350891 MiB 00:18:41.560 list of standard malloc elements. 
size: 199.287964 MiB 00:18:41.560 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:18:41.560 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:18:41.560 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:18:41.560 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:18:41.560 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:18:41.560 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:18:41.560 element at address: 0x200019deff40 with size: 0.062683 MiB 00:18:41.560 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:18:41.560 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:18:41.560 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:18:41.560 element at address: 0x200012bff040 with size: 0.000305 MiB 00:18:41.560 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:18:41.560 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:18:41.560 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:18:41.560 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200000cff000 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012bff180 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012bff280 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012bff380 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012bff480 with size: 0.000244 MiB 00:18:41.561 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012bff680 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012bff780 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012bff880 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012bff980 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:18:41.561 
element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200019affc40 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4910c0 with size: 0.000244 
MiB 00:18:41.561 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:18:41.561 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b492cc0 
with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:18:41.562 element at 
address: 0x20001b4948c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:18:41.562 element at address: 0x200028863f40 with size: 0.000244 MiB 00:18:41.562 element at address: 0x200028864040 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886af80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886b080 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886b180 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886b280 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886b380 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886b480 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886b580 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886b680 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886b780 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886b880 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886b980 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886bb80 with size: 0.000244 MiB 
00:18:41.562 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886be80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886c080 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886c180 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886c280 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886c380 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886c480 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886c580 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886c680 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886c780 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886c880 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886c980 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886d080 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886d180 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886d280 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886d380 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886d480 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886d580 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886d680 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886d780 with 
size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886d880 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886d980 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886da80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886db80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886de80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886df80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886e080 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886e180 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886e280 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886e380 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886e480 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886e580 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886e680 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886e780 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886e880 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886e980 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886f080 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886f180 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886f280 with size: 0.000244 MiB 00:18:41.562 element at address: 
0x20002886f380 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886f480 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886f580 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886f680 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886f780 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886f880 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886f980 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:18:41.562 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:18:41.562 list of memzone associated elements. size: 607.930908 MiB 00:18:41.562 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:18:41.563 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:18:41.563 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:18:41.563 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:18:41.563 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:18:41.563 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58089_0 00:18:41.563 element at address: 0x200000dff340 with size: 48.003113 MiB 00:18:41.563 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58089_0 00:18:41.563 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:18:41.563 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58089_0 00:18:41.563 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:18:41.563 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:18:41.563 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:18:41.563 associated memzone info: size: 18.004944 MiB name: 
MP_SCSI_TASK_Pool_0 00:18:41.563 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:18:41.563 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58089_0 00:18:41.563 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:18:41.563 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58089 00:18:41.563 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:18:41.563 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58089 00:18:41.563 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:18:41.563 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:18:41.563 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:18:41.563 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:18:41.563 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:18:41.563 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:18:41.563 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:18:41.563 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:18:41.563 element at address: 0x200000cff100 with size: 1.000549 MiB 00:18:41.563 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58089 00:18:41.563 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:18:41.563 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58089 00:18:41.563 element at address: 0x200019affd40 with size: 1.000549 MiB 00:18:41.563 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58089 00:18:41.563 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:18:41.563 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58089 00:18:41.563 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:18:41.563 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58089 00:18:41.563 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:18:41.563 associated memzone info: size: 0.500366 MiB name: 
RG_MP_bdev_io_58089 00:18:41.563 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:18:41.563 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:18:41.563 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:18:41.563 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:18:41.563 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:18:41.563 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:18:41.563 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:18:41.563 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58089 00:18:41.563 element at address: 0x20000085df80 with size: 0.125549 MiB 00:18:41.563 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58089 00:18:41.563 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:18:41.563 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:18:41.563 element at address: 0x200028864140 with size: 0.023804 MiB 00:18:41.563 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:18:41.563 element at address: 0x200000859d40 with size: 0.016174 MiB 00:18:41.563 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58089 00:18:41.563 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:18:41.563 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:18:41.563 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:18:41.563 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58089 00:18:41.563 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:18:41.563 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58089 00:18:41.563 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:18:41.563 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58089 00:18:41.563 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:18:41.563 associated memzone info: size: 0.000183 MiB 
name: MP_Session_Pool 00:18:41.563 13:38:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:18:41.563 13:38:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58089 00:18:41.563 13:38:44 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58089 ']' 00:18:41.563 13:38:44 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58089 00:18:41.563 13:38:44 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:18:41.563 13:38:44 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.563 13:38:44 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58089 00:18:41.563 13:38:44 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.563 13:38:44 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.563 killing process with pid 58089 00:18:41.563 13:38:44 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58089' 00:18:41.563 13:38:44 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58089 00:18:41.563 13:38:44 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58089 00:18:44.096 00:18:44.096 real 0m3.911s 00:18:44.096 user 0m3.933s 00:18:44.096 sys 0m0.639s 00:18:44.096 13:38:46 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.096 ************************************ 00:18:44.096 END TEST dpdk_mem_utility 00:18:44.096 ************************************ 00:18:44.096 13:38:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:18:44.096 13:38:46 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:18:44.096 13:38:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:44.096 13:38:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.096 13:38:46 -- common/autotest_common.sh@10 -- # 
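The memory dump above is a long stream of `element at address: 0x… with size: … MiB` records. As a rough sanity check when reading such a dump, the per-element sizes can be totalled with awk; the snippet below is a hedged sketch (the sample lines are copied from the log above, and nothing here is part of SPDK or DPDK itself):

```shell
# Sum the "with size: X MiB" fields from dpdk_mem_utility-style dump lines.
sample='element at address: 0x200028863f40 with size: 0.000244 MiB
element at address: 0x20001b4954c0 with size: 211.416809 MiB'

total=$(printf '%s\n' "$sample" |
    awk '/with size:/ {
        # Find the "size:" field and add the number that follows it.
        for (i = 1; i <= NF; i++) if ($i == "size:") sum += $(i + 1)
    } END { printf "%.6f", sum }')
echo "total: $total MiB"
```

Run over the full dump, the same pipeline gives a quick cross-check against the `size:` headers the tool prints for each element list.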
set +x 00:18:44.096 ************************************ 00:18:44.096 START TEST event 00:18:44.096 ************************************ 00:18:44.096 13:38:46 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:18:44.096 * Looking for test storage... 00:18:44.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:18:44.096 13:38:46 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:44.096 13:38:46 event -- common/autotest_common.sh@1693 -- # lcov --version 00:18:44.096 13:38:46 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:44.096 13:38:46 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:44.096 13:38:46 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.096 13:38:46 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.096 13:38:46 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.096 13:38:46 event -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.096 13:38:46 event -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.096 13:38:46 event -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.096 13:38:46 event -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.097 13:38:46 event -- scripts/common.sh@338 -- # local 'op=<' 00:18:44.097 13:38:46 event -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.097 13:38:46 event -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.097 13:38:46 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.097 13:38:46 event -- scripts/common.sh@344 -- # case "$op" in 00:18:44.097 13:38:46 event -- scripts/common.sh@345 -- # : 1 00:18:44.097 13:38:46 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.097 13:38:46 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:44.097 13:38:46 event -- scripts/common.sh@365 -- # decimal 1 00:18:44.097 13:38:46 event -- scripts/common.sh@353 -- # local d=1 00:18:44.097 13:38:46 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.097 13:38:46 event -- scripts/common.sh@355 -- # echo 1 00:18:44.097 13:38:46 event -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.097 13:38:46 event -- scripts/common.sh@366 -- # decimal 2 00:18:44.097 13:38:46 event -- scripts/common.sh@353 -- # local d=2 00:18:44.097 13:38:46 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.097 13:38:46 event -- scripts/common.sh@355 -- # echo 2 00:18:44.097 13:38:46 event -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.097 13:38:46 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.097 13:38:46 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.097 13:38:46 event -- scripts/common.sh@368 -- # return 0 00:18:44.097 13:38:46 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.097 13:38:46 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:44.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.097 --rc genhtml_branch_coverage=1 00:18:44.097 --rc genhtml_function_coverage=1 00:18:44.097 --rc genhtml_legend=1 00:18:44.097 --rc geninfo_all_blocks=1 00:18:44.097 --rc geninfo_unexecuted_blocks=1 00:18:44.097 00:18:44.097 ' 00:18:44.097 13:38:46 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:44.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.097 --rc genhtml_branch_coverage=1 00:18:44.097 --rc genhtml_function_coverage=1 00:18:44.097 --rc genhtml_legend=1 00:18:44.097 --rc geninfo_all_blocks=1 00:18:44.097 --rc geninfo_unexecuted_blocks=1 00:18:44.097 00:18:44.097 ' 00:18:44.097 13:38:46 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:44.097 --rc lcov_branch_coverage=1 --rc 
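The xtrace above walks `scripts/common.sh` comparing `1.15 < 2`: each version string is split on separators into an array, then components are compared numerically from the left. A minimal sketch of that component-wise comparison is below; the function name is ours, and the real `cmp_versions` helper additionally handles `.-:` separators and other operators:

```shell
#!/usr/bin/env bash
# Hypothetical helper: return 0 (true) if version $1 is strictly less
# than version $2, comparing dot-separated components numerically.
version_lt() {
    local IFS=.
    local -a ver1=($1) ver2=($2)
    local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < len; i++ )); do
        # Missing components compare as 0 (e.g. "2" is "2.0").
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This mirrors why the trace takes the `lt` branch: the first components already decide the comparison (`1 < 2`), so `lcov --version` 1.15 selects the legacy lcov options.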
lcov_function_coverage=1 00:18:44.097 --rc genhtml_branch_coverage=1 00:18:44.097 --rc genhtml_function_coverage=1 00:18:44.097 --rc genhtml_legend=1 00:18:44.097 --rc geninfo_all_blocks=1 00:18:44.097 --rc geninfo_unexecuted_blocks=1 00:18:44.097 00:18:44.097 ' 00:18:44.097 13:38:46 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:44.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.097 --rc genhtml_branch_coverage=1 00:18:44.097 --rc genhtml_function_coverage=1 00:18:44.097 --rc genhtml_legend=1 00:18:44.097 --rc geninfo_all_blocks=1 00:18:44.097 --rc geninfo_unexecuted_blocks=1 00:18:44.097 00:18:44.097 ' 00:18:44.097 13:38:46 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:44.097 13:38:46 event -- bdev/nbd_common.sh@6 -- # set -e 00:18:44.097 13:38:46 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:18:44.097 13:38:46 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:18:44.097 13:38:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.097 13:38:46 event -- common/autotest_common.sh@10 -- # set +x 00:18:44.097 ************************************ 00:18:44.097 START TEST event_perf 00:18:44.097 ************************************ 00:18:44.097 13:38:46 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:18:44.097 Running I/O for 1 seconds...[2024-11-20 13:38:46.859459] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:18:44.097 [2024-11-20 13:38:46.859620] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58197 ] 00:18:44.356 [2024-11-20 13:38:47.048273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:44.356 [2024-11-20 13:38:47.208979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.356 [2024-11-20 13:38:47.209221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.356 [2024-11-20 13:38:47.209319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.356 [2024-11-20 13:38:47.209474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.732 Running I/O for 1 seconds... 00:18:45.732 lcore 0: 129614 00:18:45.732 lcore 1: 129614 00:18:45.732 lcore 2: 129615 00:18:45.732 lcore 3: 129615 00:18:45.732 done. 
00:18:45.732 00:18:45.732 real 0m1.653s 00:18:45.732 user 0m4.384s 00:18:45.732 sys 0m0.142s 00:18:45.732 13:38:48 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:45.732 13:38:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:18:45.732 ************************************ 00:18:45.732 END TEST event_perf 00:18:45.732 ************************************ 00:18:45.732 13:38:48 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:18:45.732 13:38:48 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:45.732 13:38:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:45.732 13:38:48 event -- common/autotest_common.sh@10 -- # set +x 00:18:45.732 ************************************ 00:18:45.732 START TEST event_reactor 00:18:45.732 ************************************ 00:18:45.732 13:38:48 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:18:45.732 [2024-11-20 13:38:48.576718] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:18:45.732 [2024-11-20 13:38:48.576960] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58237 ] 00:18:45.992 [2024-11-20 13:38:48.782287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.250 [2024-11-20 13:38:48.919314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.625 test_start 00:18:47.625 oneshot 00:18:47.625 tick 100 00:18:47.625 tick 100 00:18:47.625 tick 250 00:18:47.625 tick 100 00:18:47.625 tick 100 00:18:47.625 tick 100 00:18:47.625 tick 250 00:18:47.625 tick 500 00:18:47.625 tick 100 00:18:47.625 tick 100 00:18:47.625 tick 250 00:18:47.625 tick 100 00:18:47.625 tick 100 00:18:47.625 test_end 00:18:47.625 00:18:47.625 real 0m1.627s 00:18:47.625 user 0m1.405s 00:18:47.625 sys 0m0.110s 00:18:47.625 13:38:50 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:47.625 ************************************ 00:18:47.625 END TEST event_reactor 00:18:47.625 ************************************ 00:18:47.625 13:38:50 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:18:47.625 13:38:50 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:18:47.625 13:38:50 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:47.625 13:38:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:47.625 13:38:50 event -- common/autotest_common.sh@10 -- # set +x 00:18:47.625 ************************************ 00:18:47.625 START TEST event_reactor_perf 00:18:47.625 ************************************ 00:18:47.625 13:38:50 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:18:47.625 [2024-11-20 
13:38:50.256433] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:18:47.625 [2024-11-20 13:38:50.256619] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58273 ] 00:18:47.625 [2024-11-20 13:38:50.445994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.884 [2024-11-20 13:38:50.579499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.262 test_start 00:18:49.262 test_end 00:18:49.262 Performance: 289625 events per second 00:18:49.262 00:18:49.262 real 0m1.605s 00:18:49.262 user 0m1.387s 00:18:49.262 sys 0m0.107s 00:18:49.262 13:38:51 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.262 13:38:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:18:49.262 ************************************ 00:18:49.262 END TEST event_reactor_perf 00:18:49.262 ************************************ 00:18:49.262 13:38:51 event -- event/event.sh@49 -- # uname -s 00:18:49.262 13:38:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:18:49.262 13:38:51 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:18:49.262 13:38:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:49.262 13:38:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.262 13:38:51 event -- common/autotest_common.sh@10 -- # set +x 00:18:49.262 ************************************ 00:18:49.262 START TEST event_scheduler 00:18:49.262 ************************************ 00:18:49.262 13:38:51 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:18:49.262 * Looking for test storage... 
00:18:49.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:18:49.262 13:38:51 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:49.262 13:38:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:49.262 13:38:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:18:49.262 13:38:52 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:49.262 13:38:52 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:18:49.262 13:38:52 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:49.262 13:38:52 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:49.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.262 --rc genhtml_branch_coverage=1 00:18:49.262 --rc genhtml_function_coverage=1 00:18:49.262 --rc genhtml_legend=1 00:18:49.262 --rc geninfo_all_blocks=1 00:18:49.262 --rc geninfo_unexecuted_blocks=1 00:18:49.262 00:18:49.262 ' 00:18:49.262 13:38:52 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:49.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.262 --rc genhtml_branch_coverage=1 00:18:49.262 --rc genhtml_function_coverage=1 00:18:49.262 --rc 
genhtml_legend=1 00:18:49.262 --rc geninfo_all_blocks=1 00:18:49.262 --rc geninfo_unexecuted_blocks=1 00:18:49.262 00:18:49.262 ' 00:18:49.262 13:38:52 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:49.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.262 --rc genhtml_branch_coverage=1 00:18:49.262 --rc genhtml_function_coverage=1 00:18:49.262 --rc genhtml_legend=1 00:18:49.262 --rc geninfo_all_blocks=1 00:18:49.262 --rc geninfo_unexecuted_blocks=1 00:18:49.262 00:18:49.262 ' 00:18:49.262 13:38:52 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:49.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.262 --rc genhtml_branch_coverage=1 00:18:49.262 --rc genhtml_function_coverage=1 00:18:49.262 --rc genhtml_legend=1 00:18:49.262 --rc geninfo_all_blocks=1 00:18:49.262 --rc geninfo_unexecuted_blocks=1 00:18:49.262 00:18:49.262 ' 00:18:49.262 13:38:52 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:18:49.262 13:38:52 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58349 00:18:49.262 13:38:52 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:18:49.262 13:38:52 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58349 00:18:49.262 13:38:52 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:18:49.262 13:38:52 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58349 ']' 00:18:49.262 13:38:52 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.262 13:38:52 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:49.262 13:38:52 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.262 13:38:52 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.262 13:38:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:49.262 [2024-11-20 13:38:52.165855] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:18:49.262 [2024-11-20 13:38:52.166101] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58349 ] 00:18:49.521 [2024-11-20 13:38:52.363146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.781 [2024-11-20 13:38:52.532548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.781 [2024-11-20 13:38:52.532649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.781 [2024-11-20 13:38:52.533744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.781 [2024-11-20 13:38:52.533802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.349 13:38:53 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.349 13:38:53 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:18:50.349 13:38:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:18:50.349 13:38:53 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.349 13:38:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:50.349 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:50.349 POWER: Cannot set governor of lcore 0 to userspace 00:18:50.349 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:50.349 POWER: Cannot set governor of lcore 0 to performance 00:18:50.349 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:50.349 POWER: Cannot set governor of lcore 0 to userspace 00:18:50.349 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:50.349 POWER: Cannot set governor of lcore 0 to userspace 00:18:50.349 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:18:50.349 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:18:50.349 POWER: Unable to set Power Management Environment for lcore 0 00:18:50.349 [2024-11-20 13:38:53.204812] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:18:50.349 [2024-11-20 13:38:53.204846] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:18:50.349 [2024-11-20 13:38:53.204864] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:18:50.349 [2024-11-20 13:38:53.204927] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:18:50.349 [2024-11-20 13:38:53.204947] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:18:50.349 [2024-11-20 13:38:53.204965] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:18:50.349 13:38:53 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.349 13:38:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:18:50.349 13:38:53 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.349 13:38:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:50.916 [2024-11-20 13:38:53.548470] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:18:50.916 13:38:53 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.916 13:38:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:18:50.916 13:38:53 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:50.916 13:38:53 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.916 13:38:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:50.916 ************************************ 00:18:50.916 START TEST scheduler_create_thread 00:18:50.916 ************************************ 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:50.916 2 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:50.916 3 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:50.916 4 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:50.916 5 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:50.916 6 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:18:50.916 7 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:50.916 8 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:50.916 9 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:50.916 10 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:18:50.916 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.917 13:38:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:52.296 13:38:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.296 13:38:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:18:52.296 13:38:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:18:52.296 13:38:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.296 13:38:55 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:53.342 13:38:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.342 00:18:53.342 real 0m2.618s 00:18:53.342 user 0m0.019s 00:18:53.342 sys 0m0.007s 00:18:53.342 13:38:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:53.342 ************************************ 00:18:53.342 END TEST scheduler_create_thread 00:18:53.342 13:38:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:53.342 ************************************ 00:18:53.342 13:38:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:53.342 13:38:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58349 00:18:53.342 13:38:56 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58349 ']' 00:18:53.342 13:38:56 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58349 00:18:53.342 13:38:56 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:18:53.342 13:38:56 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.342 13:38:56 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58349 00:18:53.342 13:38:56 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:53.342 13:38:56 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:53.342 13:38:56 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58349' 00:18:53.342 killing process with pid 58349 00:18:53.342 13:38:56 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58349 00:18:53.342 13:38:56 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58349 00:18:53.909 [2024-11-20 13:38:56.659409] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:18:54.844 00:18:54.844 real 0m5.892s 00:18:54.844 user 0m10.384s 00:18:54.844 sys 0m0.522s 00:18:54.844 13:38:57 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.844 13:38:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:54.844 ************************************ 00:18:54.844 END TEST event_scheduler 00:18:54.844 ************************************ 00:18:55.103 13:38:57 event -- event/event.sh@51 -- # modprobe -n nbd 00:18:55.103 13:38:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:18:55.103 13:38:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:55.103 13:38:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.103 13:38:57 event -- common/autotest_common.sh@10 -- # set +x 00:18:55.103 ************************************ 00:18:55.103 START TEST app_repeat 00:18:55.103 ************************************ 00:18:55.103 13:38:57 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:18:55.103 13:38:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:55.103 13:38:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:55.103 13:38:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:18:55.103 13:38:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:55.103 13:38:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:18:55.103 13:38:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:18:55.103 13:38:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:18:55.103 13:38:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58461 00:18:55.103 13:38:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:18:55.103 13:38:57 event.app_repeat -- 
event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:18:55.103 13:38:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58461' 00:18:55.103 Process app_repeat pid: 58461 00:18:55.103 13:38:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:18:55.103 spdk_app_start Round 0 00:18:55.103 13:38:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:18:55.103 13:38:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58461 /var/tmp/spdk-nbd.sock 00:18:55.103 13:38:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58461 ']' 00:18:55.103 13:38:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:55.103 13:38:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:55.103 13:38:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:55.103 13:38:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.103 13:38:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:55.103 [2024-11-20 13:38:57.864845] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:18:55.103 [2024-11-20 13:38:57.865019] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58461 ] 00:18:55.362 [2024-11-20 13:38:58.039345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:55.362 [2024-11-20 13:38:58.173958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.362 [2024-11-20 13:38:58.173961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.298 13:38:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.298 13:38:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:18:56.298 13:38:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:56.556 Malloc0 00:18:56.556 13:38:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:56.814 Malloc1 00:18:56.814 13:38:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:56.814 13:38:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:56.814 13:38:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:56.815 13:38:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:56.815 13:38:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:56.815 13:38:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:56.815 13:38:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:56.815 13:38:59 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:56.815 13:38:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:56.815 13:38:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:56.815 13:38:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:56.815 13:38:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:56.815 13:38:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:18:56.815 13:38:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:56.815 13:38:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:56.815 13:38:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:18:57.072 /dev/nbd0 00:18:57.072 13:38:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:57.072 13:38:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:57.072 13:38:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:57.072 13:38:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:18:57.072 13:38:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:57.072 13:38:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:57.072 13:38:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:57.072 13:38:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:18:57.072 13:38:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:57.073 13:38:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:57.073 13:38:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:57.073 1+0 records in 00:18:57.073 1+0 
records out 00:18:57.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375409 s, 10.9 MB/s 00:18:57.073 13:38:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:57.073 13:38:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:18:57.073 13:38:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:57.073 13:38:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:57.073 13:38:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:18:57.073 13:38:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:57.073 13:38:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:57.073 13:38:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:18:57.331 /dev/nbd1 00:18:57.331 13:39:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:57.331 13:39:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:57.331 13:39:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:57.331 13:39:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:18:57.331 13:39:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:57.331 13:39:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:57.331 13:39:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:57.331 13:39:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:18:57.331 13:39:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:57.331 13:39:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:57.331 13:39:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:57.331 1+0 records in 00:18:57.331 1+0 records out 00:18:57.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033291 s, 12.3 MB/s 00:18:57.331 13:39:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:57.331 13:39:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:18:57.331 13:39:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:57.331 13:39:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:57.331 13:39:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:18:57.331 13:39:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:57.331 13:39:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:57.331 13:39:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:57.331 13:39:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:57.331 13:39:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:57.589 13:39:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:57.589 { 00:18:57.589 "nbd_device": "/dev/nbd0", 00:18:57.589 "bdev_name": "Malloc0" 00:18:57.589 }, 00:18:57.589 { 00:18:57.589 "nbd_device": "/dev/nbd1", 00:18:57.589 "bdev_name": "Malloc1" 00:18:57.589 } 00:18:57.589 ]' 00:18:57.589 13:39:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:57.589 { 00:18:57.589 "nbd_device": "/dev/nbd0", 00:18:57.589 "bdev_name": "Malloc0" 00:18:57.589 }, 00:18:57.589 { 00:18:57.589 "nbd_device": "/dev/nbd1", 00:18:57.589 "bdev_name": "Malloc1" 00:18:57.589 } 00:18:57.589 ]' 00:18:57.589 13:39:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:18:57.589 13:39:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:57.589 /dev/nbd1' 00:18:57.589 13:39:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:57.589 /dev/nbd1' 00:18:57.589 13:39:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:57.589 13:39:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:18:57.589 13:39:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:18:57.847 13:39:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:18:57.847 13:39:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:18:57.847 13:39:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:18:57.847 13:39:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:57.847 13:39:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:57.847 13:39:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:57.847 13:39:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:57.847 13:39:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:18:57.848 256+0 records in 00:18:57.848 256+0 records out 00:18:57.848 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00789322 s, 133 MB/s 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:57.848 256+0 records in 00:18:57.848 256+0 records out 00:18:57.848 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220499 s, 47.6 MB/s 00:18:57.848 13:39:00 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:57.848 256+0 records in 00:18:57.848 256+0 records out 00:18:57.848 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0340982 s, 30.8 MB/s 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:57.848 13:39:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:58.106 13:39:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:58.106 13:39:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:58.106 13:39:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:58.106 13:39:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:58.106 13:39:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:58.106 13:39:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:58.106 13:39:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:58.106 13:39:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:58.106 13:39:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:58.106 13:39:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:58.365 13:39:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:58.365 13:39:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:58.365 13:39:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:58.365 13:39:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:58.365 13:39:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:58.365 13:39:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:58.365 13:39:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:18:58.365 13:39:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:58.365 13:39:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:58.365 13:39:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:58.365 13:39:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:58.931 13:39:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:58.931 13:39:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:58.931 13:39:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:58.931 13:39:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:58.931 13:39:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:58.931 13:39:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:18:58.931 13:39:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:18:58.931 13:39:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:18:58.931 13:39:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:18:58.931 13:39:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:18:58.931 13:39:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:58.931 13:39:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:18:58.931 13:39:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:18:59.190 13:39:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:19:00.565 [2024-11-20 13:39:03.102778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:00.565 [2024-11-20 13:39:03.232421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.565 [2024-11-20 13:39:03.232438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.565 
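The round that just completed follows a fixed write/verify cycle from bdev/nbd_common.sh: fill a temp file with 1 MiB of /dev/urandom, dd it onto each attached NBD device, then `cmp` the first 1M of every device back against the pattern. Below is a minimal, self-contained sketch of that cycle; plain files stand in for /dev/nbd0 and /dev/nbd1 so it runs without SPDK, and the real helper additionally passes `oflag=direct` to dd (omitted here because regular files may not support O_DIRECT). All paths and names in this sketch are illustrative.

```shell
#!/usr/bin/env bash
# Sketch of the nbd_dd_data_verify write/verify cycle seen in the trace.
# NOT the SPDK script itself: files replace the /dev/nbdX block devices.
set -euo pipefail

tmp=$(mktemp -d)
rand_file="$tmp/nbdrandtest"              # plays the role of test/event/nbdrandtest
nbd_list=("$tmp/nbd0" "$tmp/nbd1")        # stand-ins for /dev/nbd0 and /dev/nbd1

# Write phase: 256 x 4096 B = 1 MiB of random data, copied onto every "device".
dd if=/dev/urandom of="$rand_file" bs=4096 count=256 status=none
for dev in "${nbd_list[@]}"; do
    dd if="$rand_file" of="$dev" bs=4096 count=256 status=none
done

# Verify phase: byte-compare the first 1M of each "device" against the pattern;
# cmp exits non-zero (failing the script via set -e) on any mismatch.
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$rand_file" "$dev"
done

echo "verify OK"
rm -rf "$tmp"
```

In the actual test the same cycle runs once per `spdk_app_start` round, which is why the dd/cmp statistics repeat with each "Round N" banner.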
[2024-11-20 13:39:03.429109] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:19:00.565 [2024-11-20 13:39:03.429248] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:19:02.467 spdk_app_start Round 1 00:19:02.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:02.468 13:39:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:19:02.468 13:39:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:19:02.468 13:39:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58461 /var/tmp/spdk-nbd.sock 00:19:02.468 13:39:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58461 ']' 00:19:02.468 13:39:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:02.468 13:39:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.468 13:39:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:19:02.468 13:39:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.468 13:39:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:19:02.468 13:39:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.468 13:39:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:19:02.468 13:39:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:03.115 Malloc0 00:19:03.115 13:39:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:03.115 Malloc1 00:19:03.115 13:39:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:03.115 13:39:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:03.115 13:39:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:03.115 13:39:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:03.115 13:39:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:03.115 13:39:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:03.115 13:39:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:03.115 13:39:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:03.115 13:39:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:03.115 13:39:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:03.115 13:39:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:03.115 13:39:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:03.115 13:39:06 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:19:03.115 13:39:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:03.115 13:39:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:03.115 13:39:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:19:03.373 /dev/nbd0 00:19:03.632 13:39:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:03.632 13:39:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:03.632 13:39:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:03.632 13:39:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:19:03.632 13:39:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:03.632 13:39:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:03.632 13:39:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:03.632 13:39:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:19:03.632 13:39:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:03.632 13:39:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:03.632 13:39:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:03.632 1+0 records in 00:19:03.632 1+0 records out 00:19:03.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000713771 s, 5.7 MB/s 00:19:03.632 13:39:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:03.632 13:39:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:19:03.632 13:39:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:03.632 13:39:06 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:03.632 13:39:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:19:03.632 13:39:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:03.632 13:39:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:03.632 13:39:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:19:03.891 /dev/nbd1 00:19:03.891 13:39:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:03.891 13:39:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:03.891 13:39:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:03.891 13:39:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:19:03.891 13:39:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:03.891 13:39:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:03.891 13:39:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:03.891 13:39:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:19:03.891 13:39:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:03.891 13:39:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:03.891 13:39:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:03.891 1+0 records in 00:19:03.891 1+0 records out 00:19:03.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369812 s, 11.1 MB/s 00:19:03.891 13:39:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:03.891 13:39:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:19:03.891 13:39:06 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:03.891 13:39:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:03.891 13:39:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:19:03.891 13:39:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:03.891 13:39:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:03.891 13:39:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:03.891 13:39:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:03.891 13:39:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:04.149 13:39:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:04.149 { 00:19:04.149 "nbd_device": "/dev/nbd0", 00:19:04.149 "bdev_name": "Malloc0" 00:19:04.149 }, 00:19:04.149 { 00:19:04.149 "nbd_device": "/dev/nbd1", 00:19:04.149 "bdev_name": "Malloc1" 00:19:04.149 } 00:19:04.149 ]' 00:19:04.149 13:39:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:04.149 13:39:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:04.149 { 00:19:04.149 "nbd_device": "/dev/nbd0", 00:19:04.149 "bdev_name": "Malloc0" 00:19:04.149 }, 00:19:04.149 { 00:19:04.149 "nbd_device": "/dev/nbd1", 00:19:04.149 "bdev_name": "Malloc1" 00:19:04.149 } 00:19:04.149 ]' 00:19:04.149 13:39:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:04.149 /dev/nbd1' 00:19:04.149 13:39:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:04.149 /dev/nbd1' 00:19:04.149 13:39:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:04.149 13:39:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:19:04.149 13:39:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:19:04.149 
13:39:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:19:04.150 13:39:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:19:04.150 13:39:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:19:04.150 13:39:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:04.150 13:39:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:04.150 13:39:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:04.150 13:39:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:04.150 13:39:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:04.150 13:39:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:19:04.150 256+0 records in 00:19:04.150 256+0 records out 00:19:04.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00869126 s, 121 MB/s 00:19:04.150 13:39:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:04.150 13:39:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:04.150 256+0 records in 00:19:04.150 256+0 records out 00:19:04.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0294441 s, 35.6 MB/s 00:19:04.150 13:39:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:04.150 13:39:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:04.408 256+0 records in 00:19:04.408 256+0 records out 00:19:04.408 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.034973 s, 30.0 MB/s 00:19:04.408 13:39:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:19:04.408 13:39:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:04.408 13:39:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:04.408 13:39:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:04.408 13:39:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:04.408 13:39:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:04.408 13:39:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:04.408 13:39:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:04.408 13:39:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:19:04.408 13:39:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:04.408 13:39:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:19:04.408 13:39:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:04.408 13:39:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:19:04.408 13:39:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.408 13:39:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:04.408 13:39:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:04.408 13:39:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:19:04.408 13:39:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:04.408 13:39:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:04.667 13:39:07 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:04.667 13:39:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:04.667 13:39:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:04.667 13:39:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:04.667 13:39:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:04.667 13:39:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:04.667 13:39:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:19:04.667 13:39:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:19:04.667 13:39:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:04.667 13:39:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:04.926 13:39:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:04.926 13:39:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:04.926 13:39:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:04.926 13:39:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:04.926 13:39:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:04.926 13:39:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:04.926 13:39:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:19:04.926 13:39:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:19:04.926 13:39:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:04.926 13:39:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.926 13:39:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:05.185 13:39:08 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:05.185 13:39:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:05.185 13:39:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:05.185 13:39:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:05.185 13:39:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:19:05.185 13:39:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:05.444 13:39:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:19:05.444 13:39:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:19:05.444 13:39:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:19:05.444 13:39:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:19:05.444 13:39:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:05.444 13:39:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:19:05.444 13:39:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:19:05.703 13:39:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:19:07.083 [2024-11-20 13:39:09.678215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:07.083 [2024-11-20 13:39:09.810327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.083 [2024-11-20 13:39:09.810332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.342 [2024-11-20 13:39:10.006836] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:19:07.342 [2024-11-20 13:39:10.006984] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:19:08.716 spdk_app_start Round 2 00:19:08.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
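The `nbd_get_count` sequence that closes each round reduces the `nbd_get_disks` RPC reply to device paths with jq, then counts them with `grep -c`: 2 while both disks are attached, 0 after `nbd_stop_disk` (the `-- # true` in the trace absorbs grep's non-zero exit on an empty list). A small self-contained sketch, with the JSON pasted from the log in place of a live `rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks` call:

```shell
#!/usr/bin/env bash
# Sketch of nbd_get_count from bdev/nbd_common.sh. The JSON is hard-coded
# sample data matching the trace; the real helper reads it from the RPC socket.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'

# Extract just the device paths, one per line.
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

# Count lines naming an nbd device; "|| true" mirrors the trace's handling of
# grep exiting 1 when the list is empty (count 0 after the disks are stopped).
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"   # prints 2 for this sample; 0 once nbd_get_disks returns []
```

The round then asserts the count matches expectations (`'[' 2 -ne 2 ']'`, `'[' 0 -ne 0 ']'`) before tearing the app down with `spdk_kill_instance SIGTERM`.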
00:19:08.716 13:39:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:19:08.716 13:39:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:19:08.716 13:39:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58461 /var/tmp/spdk-nbd.sock 00:19:08.716 13:39:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58461 ']' 00:19:08.716 13:39:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:08.716 13:39:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.716 13:39:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:08.716 13:39:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.716 13:39:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:19:08.974 13:39:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.974 13:39:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:19:08.974 13:39:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:09.540 Malloc0 00:19:09.540 13:39:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:19:09.798 Malloc1 00:19:09.798 13:39:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:09.798 13:39:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.798 13:39:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:09.798 13:39:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:09.798 13:39:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:09.798 13:39:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:09.798 13:39:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:19:09.798 13:39:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.798 13:39:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:19:09.798 13:39:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:09.798 13:39:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:09.798 13:39:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:09.798 13:39:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:19:09.798 13:39:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:09.798 13:39:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:09.798 13:39:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:19:10.056 /dev/nbd0 00:19:10.056 13:39:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:10.056 13:39:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:10.056 13:39:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:10.056 13:39:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:19:10.056 13:39:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:10.056 13:39:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:10.056 13:39:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:10.056 13:39:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:19:10.056 13:39:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:19:10.056 13:39:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:10.056 13:39:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:10.056 1+0 records in 00:19:10.056 1+0 records out 00:19:10.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596924 s, 6.9 MB/s 00:19:10.056 13:39:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:10.056 13:39:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:19:10.056 13:39:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:10.056 13:39:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:10.056 13:39:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:19:10.056 13:39:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:10.056 13:39:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:10.056 13:39:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:19:10.315 /dev/nbd1 00:19:10.315 13:39:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:10.315 13:39:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:10.315 13:39:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:10.315 13:39:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:19:10.315 13:39:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:10.315 13:39:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:10.315 13:39:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:10.315 13:39:13 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:19:10.315 13:39:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:10.315 13:39:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:10.315 13:39:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:19:10.315 1+0 records in 00:19:10.315 1+0 records out 00:19:10.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304277 s, 13.5 MB/s 00:19:10.315 13:39:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:10.315 13:39:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:19:10.315 13:39:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:19:10.315 13:39:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:10.315 13:39:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:19:10.315 13:39:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:10.315 13:39:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:10.315 13:39:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:10.315 13:39:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:10.315 13:39:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:10.574 13:39:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:10.574 { 00:19:10.574 "nbd_device": "/dev/nbd0", 00:19:10.574 "bdev_name": "Malloc0" 00:19:10.574 }, 00:19:10.574 { 00:19:10.574 "nbd_device": "/dev/nbd1", 00:19:10.574 "bdev_name": "Malloc1" 00:19:10.574 } 00:19:10.574 ]' 00:19:10.574 13:39:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:10.574 { 
00:19:10.574 "nbd_device": "/dev/nbd0", 00:19:10.574 "bdev_name": "Malloc0" 00:19:10.574 }, 00:19:10.574 { 00:19:10.574 "nbd_device": "/dev/nbd1", 00:19:10.574 "bdev_name": "Malloc1" 00:19:10.574 } 00:19:10.574 ]' 00:19:10.574 13:39:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:10.833 /dev/nbd1' 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:10.833 /dev/nbd1' 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:19:10.833 256+0 records in 00:19:10.833 256+0 records out 00:19:10.833 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00731827 s, 143 MB/s 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:10.833 13:39:13 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:10.833 256+0 records in 00:19:10.833 256+0 records out 00:19:10.833 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257044 s, 40.8 MB/s 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:10.833 256+0 records in 00:19:10.833 256+0 records out 00:19:10.833 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308009 s, 34.0 MB/s 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:10.833 13:39:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:19:10.834 13:39:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:10.834 13:39:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:10.834 13:39:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:10.834 13:39:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:19:10.834 13:39:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:10.834 13:39:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:19:10.834 13:39:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:19:10.834 13:39:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:19:10.834 13:39:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:10.834 13:39:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:10.834 13:39:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:10.834 13:39:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:19:10.834 13:39:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:10.834 13:39:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:11.091 13:39:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:11.091 13:39:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:11.091 13:39:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:11.091 13:39:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:11.091 13:39:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:11.091 13:39:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:11.091 13:39:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:19:11.091 13:39:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:19:11.091 13:39:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:11.091 13:39:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:11.349 13:39:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:11.349 13:39:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:11.349 13:39:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:11.349 13:39:14 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:11.349 13:39:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:11.349 13:39:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:11.349 13:39:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:19:11.349 13:39:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:19:11.349 13:39:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:11.349 13:39:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:11.349 13:39:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:11.915 13:39:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:11.915 13:39:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:11.915 13:39:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:11.915 13:39:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:11.915 13:39:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:19:11.915 13:39:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:11.915 13:39:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:19:11.915 13:39:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:19:11.915 13:39:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:19:11.915 13:39:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:19:11.915 13:39:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:11.915 13:39:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:19:11.915 13:39:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:19:12.173 13:39:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:19:13.643 
[2024-11-20 13:39:16.148140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:13.643 [2024-11-20 13:39:16.278546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.643 [2024-11-20 13:39:16.278559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.643 [2024-11-20 13:39:16.475342] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:19:13.643 [2024-11-20 13:39:16.475478] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:19:15.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:15.545 13:39:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58461 /var/tmp/spdk-nbd.sock 00:19:15.545 13:39:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58461 ']' 00:19:15.545 13:39:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:15.545 13:39:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.545 13:39:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:19:15.545 13:39:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.545 13:39:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:19:15.545 13:39:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.545 13:39:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:19:15.545 13:39:18 event.app_repeat -- event/event.sh@39 -- # killprocess 58461 00:19:15.545 13:39:18 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58461 ']' 00:19:15.545 13:39:18 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58461 00:19:15.545 13:39:18 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:19:15.545 13:39:18 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.545 13:39:18 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58461 00:19:15.545 killing process with pid 58461 00:19:15.545 13:39:18 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:15.545 13:39:18 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:15.545 13:39:18 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58461' 00:19:15.545 13:39:18 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58461 00:19:15.545 13:39:18 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58461 00:19:16.480 spdk_app_start is called in Round 0. 00:19:16.480 Shutdown signal received, stop current app iteration 00:19:16.480 Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 reinitialization... 00:19:16.480 spdk_app_start is called in Round 1. 00:19:16.480 Shutdown signal received, stop current app iteration 00:19:16.480 Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 reinitialization... 00:19:16.480 spdk_app_start is called in Round 2. 
00:19:16.480 Shutdown signal received, stop current app iteration 00:19:16.480 Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 reinitialization... 00:19:16.480 spdk_app_start is called in Round 3. 00:19:16.480 Shutdown signal received, stop current app iteration 00:19:16.480 13:39:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:19:16.480 13:39:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:19:16.480 00:19:16.480 real 0m21.573s 00:19:16.480 user 0m47.685s 00:19:16.480 sys 0m3.093s 00:19:16.480 13:39:19 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:16.480 ************************************ 00:19:16.480 END TEST app_repeat 00:19:16.480 ************************************ 00:19:16.480 13:39:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:19:16.737 13:39:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:19:16.737 13:39:19 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:19:16.737 13:39:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:16.737 13:39:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.737 13:39:19 event -- common/autotest_common.sh@10 -- # set +x 00:19:16.737 ************************************ 00:19:16.737 START TEST cpu_locks 00:19:16.737 ************************************ 00:19:16.737 13:39:19 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:19:16.737 * Looking for test storage... 
00:19:16.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:19:16.737 13:39:19 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:16.737 13:39:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:19:16.737 13:39:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:16.737 13:39:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:16.737 13:39:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:19:16.737 13:39:19 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.737 13:39:19 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:16.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.737 --rc genhtml_branch_coverage=1 00:19:16.737 --rc genhtml_function_coverage=1 00:19:16.737 --rc genhtml_legend=1 00:19:16.737 --rc geninfo_all_blocks=1 00:19:16.737 --rc geninfo_unexecuted_blocks=1 00:19:16.737 00:19:16.737 ' 00:19:16.737 13:39:19 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:16.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.737 --rc genhtml_branch_coverage=1 00:19:16.737 --rc genhtml_function_coverage=1 00:19:16.737 --rc genhtml_legend=1 00:19:16.737 --rc geninfo_all_blocks=1 00:19:16.737 --rc geninfo_unexecuted_blocks=1 
00:19:16.737 00:19:16.737 ' 00:19:16.737 13:39:19 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:16.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.737 --rc genhtml_branch_coverage=1 00:19:16.737 --rc genhtml_function_coverage=1 00:19:16.737 --rc genhtml_legend=1 00:19:16.737 --rc geninfo_all_blocks=1 00:19:16.737 --rc geninfo_unexecuted_blocks=1 00:19:16.737 00:19:16.737 ' 00:19:16.737 13:39:19 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:16.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.737 --rc genhtml_branch_coverage=1 00:19:16.737 --rc genhtml_function_coverage=1 00:19:16.737 --rc genhtml_legend=1 00:19:16.737 --rc geninfo_all_blocks=1 00:19:16.737 --rc geninfo_unexecuted_blocks=1 00:19:16.737 00:19:16.737 ' 00:19:16.737 13:39:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:19:16.737 13:39:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:19:16.737 13:39:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:19:16.737 13:39:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:19:16.737 13:39:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:16.737 13:39:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.737 13:39:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:16.737 ************************************ 00:19:16.737 START TEST default_locks 00:19:16.737 ************************************ 00:19:16.737 13:39:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:19:16.737 13:39:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58930 00:19:16.737 13:39:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58930 00:19:16.737 13:39:19 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:16.737 13:39:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58930 ']' 00:19:16.737 13:39:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.737 13:39:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.737 13:39:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.737 13:39:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.737 13:39:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:19:16.995 [2024-11-20 13:39:19.748208] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:19:16.995 [2024-11-20 13:39:19.748411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58930 ] 00:19:17.253 [2024-11-20 13:39:19.923703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.253 [2024-11-20 13:39:20.056085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.195 13:39:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.195 13:39:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:19:18.195 13:39:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58930 00:19:18.195 13:39:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:18.195 13:39:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58930 00:19:18.762 13:39:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58930 00:19:18.762 13:39:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58930 ']' 00:19:18.762 13:39:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58930 00:19:18.762 13:39:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:19:18.762 13:39:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.762 13:39:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58930 00:19:18.762 killing process with pid 58930 00:19:18.762 13:39:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:18.762 13:39:21 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:18.762 13:39:21 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58930' 00:19:18.762 13:39:21 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58930 00:19:18.762 13:39:21 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58930 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58930 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58930 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:19:21.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.290 ERROR: process (pid: 58930) is no longer running 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58930 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58930 ']' 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:19:21.290 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58930) - No such process 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:19:21.290 00:19:21.290 real 0m4.019s 00:19:21.290 user 0m4.017s 00:19:21.290 sys 0m0.719s 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.290 13:39:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:19:21.290 ************************************ 00:19:21.290 END TEST default_locks 00:19:21.290 ************************************ 00:19:21.290 13:39:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:19:21.290 13:39:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:21.290 13:39:23 event.cpu_locks -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.290 13:39:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:21.290 ************************************ 00:19:21.290 START TEST default_locks_via_rpc 00:19:21.290 ************************************ 00:19:21.290 13:39:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:19:21.290 13:39:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59007 00:19:21.290 13:39:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59007 00:19:21.290 13:39:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:21.290 13:39:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59007 ']' 00:19:21.290 13:39:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.290 13:39:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.290 13:39:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.290 13:39:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.290 13:39:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:21.290 [2024-11-20 13:39:23.835350] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:19:21.290 [2024-11-20 13:39:23.835558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59007 ] 00:19:21.290 [2024-11-20 13:39:24.021529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.291 [2024-11-20 13:39:24.154435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.226 13:39:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.226 13:39:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:22.226 13:39:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:19:22.226 13:39:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.226 13:39:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.226 13:39:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.226 13:39:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:19:22.226 13:39:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:19:22.226 13:39:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:19:22.226 13:39:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:19:22.226 13:39:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:19:22.226 13:39:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.226 13:39:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.226 13:39:25 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.226 13:39:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59007 00:19:22.226 13:39:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59007 00:19:22.226 13:39:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:22.791 13:39:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59007 00:19:22.791 13:39:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59007 ']' 00:19:22.791 13:39:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59007 00:19:22.791 13:39:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:19:22.791 13:39:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.791 13:39:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59007 00:19:22.791 killing process with pid 59007 00:19:22.791 13:39:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:22.791 13:39:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:22.791 13:39:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59007' 00:19:22.791 13:39:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59007 00:19:22.791 13:39:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59007 00:19:25.321 ************************************ 00:19:25.321 END TEST default_locks_via_rpc 00:19:25.321 ************************************ 00:19:25.321 00:19:25.321 real 0m3.948s 00:19:25.321 user 0m3.924s 00:19:25.321 sys 0m0.717s 00:19:25.321 
13:39:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:25.321 13:39:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:25.321 13:39:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:19:25.321 13:39:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:25.321 13:39:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:25.321 13:39:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:25.321 ************************************ 00:19:25.321 START TEST non_locking_app_on_locked_coremask 00:19:25.321 ************************************ 00:19:25.321 13:39:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:19:25.321 13:39:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59081 00:19:25.321 13:39:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:25.321 13:39:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59081 /var/tmp/spdk.sock 00:19:25.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:25.321 13:39:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59081 ']' 00:19:25.321 13:39:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.321 13:39:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.321 13:39:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.321 13:39:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.321 13:39:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:25.321 [2024-11-20 13:39:27.838659] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:19:25.321 [2024-11-20 13:39:27.839197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59081 ] 00:19:25.321 [2024-11-20 13:39:28.024838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.321 [2024-11-20 13:39:28.151364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:19:26.256 13:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.256 13:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:26.256 13:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59098 00:19:26.256 13:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:19:26.256 13:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59098 /var/tmp/spdk2.sock 00:19:26.256 13:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59098 ']' 00:19:26.256 13:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:26.256 13:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.256 13:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:26.256 13:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.256 13:39:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:26.256 [2024-11-20 13:39:29.149885] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:19:26.256 [2024-11-20 13:39:29.150327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59098 ] 00:19:26.520 [2024-11-20 13:39:29.355973] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:19:26.520 [2024-11-20 13:39:29.356094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.789 [2024-11-20 13:39:29.619254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.326 13:39:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.326 13:39:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:29.326 13:39:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59081 00:19:29.326 13:39:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59081 00:19:29.326 13:39:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:30.263 13:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59081 00:19:30.263 13:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59081 ']' 00:19:30.263 13:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59081 00:19:30.263 13:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:19:30.263 13:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.263 13:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
59081 00:19:30.263 13:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:30.263 13:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:30.263 killing process with pid 59081 00:19:30.263 13:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59081' 00:19:30.263 13:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59081 00:19:30.263 13:39:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59081 00:19:34.452 13:39:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59098 00:19:34.452 13:39:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59098 ']' 00:19:34.452 13:39:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59098 00:19:34.452 13:39:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:19:34.452 13:39:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.452 13:39:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59098 00:19:34.452 killing process with pid 59098 00:19:34.452 13:39:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:34.452 13:39:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:34.452 13:39:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59098' 00:19:34.452 13:39:37 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59098 00:19:34.452 13:39:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59098 00:19:36.986 ************************************ 00:19:36.986 END TEST non_locking_app_on_locked_coremask 00:19:36.986 ************************************ 00:19:36.986 00:19:36.986 real 0m11.784s 00:19:36.986 user 0m12.364s 00:19:36.986 sys 0m1.584s 00:19:36.986 13:39:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:36.986 13:39:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:36.986 13:39:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:19:36.986 13:39:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:36.986 13:39:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:36.986 13:39:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:36.986 ************************************ 00:19:36.986 START TEST locking_app_on_unlocked_coremask 00:19:36.986 ************************************ 00:19:36.986 13:39:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:19:36.986 13:39:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59255 00:19:36.986 13:39:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:19:36.986 13:39:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59255 /var/tmp/spdk.sock 00:19:36.986 13:39:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59255 ']' 
00:19:36.986 13:39:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.986 13:39:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.986 13:39:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.986 13:39:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.986 13:39:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:36.986 [2024-11-20 13:39:39.681433] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:19:36.986 [2024-11-20 13:39:39.681631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59255 ] 00:19:36.986 [2024-11-20 13:39:39.866376] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:19:36.986 [2024-11-20 13:39:39.866730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.244 [2024-11-20 13:39:40.030091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.181 13:39:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:38.181 13:39:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:38.181 13:39:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59278 00:19:38.181 13:39:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:19:38.181 13:39:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59278 /var/tmp/spdk2.sock 00:19:38.181 13:39:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59278 ']' 00:19:38.181 13:39:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:38.181 13:39:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.181 13:39:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:38.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:38.181 13:39:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.181 13:39:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:38.181 [2024-11-20 13:39:41.081666] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:19:38.181 [2024-11-20 13:39:41.082168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59278 ] 00:19:38.439 [2024-11-20 13:39:41.288858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.697 [2024-11-20 13:39:41.570673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.229 13:39:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.229 13:39:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:41.229 13:39:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59278 00:19:41.229 13:39:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59278 00:19:41.229 13:39:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:42.166 13:39:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59255 00:19:42.166 13:39:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59255 ']' 00:19:42.166 13:39:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59255 00:19:42.166 13:39:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:19:42.166 13:39:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.166 13:39:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59255 00:19:42.166 killing process with pid 59255 00:19:42.166 13:39:44 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:42.166 13:39:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:42.166 13:39:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59255' 00:19:42.166 13:39:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59255 00:19:42.166 13:39:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59255 00:19:46.354 13:39:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59278 00:19:46.354 13:39:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59278 ']' 00:19:46.354 13:39:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59278 00:19:46.354 13:39:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:19:46.354 13:39:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.354 13:39:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59278 00:19:46.354 killing process with pid 59278 00:19:46.354 13:39:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:46.354 13:39:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:46.354 13:39:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59278' 00:19:46.354 13:39:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59278 00:19:46.354 13:39:49 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59278 00:19:48.886 00:19:48.886 real 0m11.891s 00:19:48.886 user 0m12.435s 00:19:48.886 sys 0m1.595s 00:19:48.886 13:39:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.886 13:39:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:48.886 ************************************ 00:19:48.886 END TEST locking_app_on_unlocked_coremask 00:19:48.886 ************************************ 00:19:48.886 13:39:51 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:19:48.886 13:39:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:48.886 13:39:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.886 13:39:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:48.886 ************************************ 00:19:48.886 START TEST locking_app_on_locked_coremask 00:19:48.886 ************************************ 00:19:48.886 13:39:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:19:48.886 13:39:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59427 00:19:48.886 13:39:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59427 /var/tmp/spdk.sock 00:19:48.886 13:39:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:48.886 13:39:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59427 ']' 00:19:48.886 13:39:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.886 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:19:48.886 13:39:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.887 13:39:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.887 13:39:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.887 13:39:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:48.887 [2024-11-20 13:39:51.626712] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:19:48.887 [2024-11-20 13:39:51.627401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59427 ] 00:19:49.145 [2024-11-20 13:39:51.811818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.145 [2024-11-20 13:39:51.942841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.083 13:39:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.083 13:39:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:50.083 13:39:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59443 00:19:50.083 13:39:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:19:50.083 13:39:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59443 /var/tmp/spdk2.sock 00:19:50.083 13:39:52 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:19:50.083 13:39:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59443 /var/tmp/spdk2.sock 00:19:50.083 13:39:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:19:50.083 13:39:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:50.083 13:39:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:19:50.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:50.083 13:39:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:50.083 13:39:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59443 /var/tmp/spdk2.sock 00:19:50.083 13:39:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59443 ']' 00:19:50.083 13:39:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:50.083 13:39:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.083 13:39:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:50.083 13:39:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.083 13:39:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:50.083 [2024-11-20 13:39:52.943718] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:19:50.083 [2024-11-20 13:39:52.943934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59443 ] 00:19:50.342 [2024-11-20 13:39:53.148984] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59427 has claimed it. 00:19:50.342 [2024-11-20 13:39:53.149101] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:19:50.910 ERROR: process (pid: 59443) is no longer running 00:19:50.910 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59443) - No such process 00:19:50.910 13:39:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.910 13:39:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:19:50.910 13:39:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:19:50.910 13:39:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:50.910 13:39:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:50.911 13:39:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:50.911 13:39:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59427 00:19:50.911 13:39:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59427 00:19:50.911 13:39:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:51.171 13:39:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59427 00:19:51.171 13:39:54 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59427 ']' 00:19:51.171 13:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59427 00:19:51.171 13:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:19:51.171 13:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.171 13:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59427 00:19:51.171 killing process with pid 59427 00:19:51.172 13:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:51.172 13:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:51.172 13:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59427' 00:19:51.172 13:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59427 00:19:51.172 13:39:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59427 00:19:53.705 ************************************ 00:19:53.705 END TEST locking_app_on_locked_coremask 00:19:53.705 ************************************ 00:19:53.705 00:19:53.705 real 0m4.688s 00:19:53.705 user 0m5.007s 00:19:53.705 sys 0m0.918s 00:19:53.705 13:39:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:53.705 13:39:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:53.705 13:39:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:19:53.705 13:39:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:19:53.705 13:39:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:53.705 13:39:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:53.705 ************************************ 00:19:53.705 START TEST locking_overlapped_coremask 00:19:53.705 ************************************ 00:19:53.705 13:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:19:53.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.705 13:39:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59513 00:19:53.705 13:39:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59513 /var/tmp/spdk.sock 00:19:53.705 13:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59513 ']' 00:19:53.705 13:39:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:53.705 13:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.705 13:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.705 13:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.705 13:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.705 13:39:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:53.705 [2024-11-20 13:39:56.395408] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:19:53.705 [2024-11-20 13:39:56.395696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59513 ] 00:19:53.705 [2024-11-20 13:39:56.586158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:53.964 [2024-11-20 13:39:56.759888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.964 [2024-11-20 13:39:56.760061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.964 [2024-11-20 13:39:56.760103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.898 13:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.898 13:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:19:54.898 13:39:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59536 00:19:54.898 13:39:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59536 /var/tmp/spdk2.sock 00:19:54.898 13:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:19:54.898 13:39:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:19:54.898 13:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59536 /var/tmp/spdk2.sock 00:19:54.898 13:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:19:54.898 13:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:54.898 13:39:57 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:19:54.898 13:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:54.898 13:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59536 /var/tmp/spdk2.sock 00:19:54.898 13:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59536 ']' 00:19:54.898 13:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:54.898 13:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.898 13:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:54.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:54.898 13:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.898 13:39:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:54.898 [2024-11-20 13:39:57.766599] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:19:54.898 [2024-11-20 13:39:57.767137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59536 ] 00:19:55.157 [2024-11-20 13:39:57.969429] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59513 has claimed it. 00:19:55.157 [2024-11-20 13:39:57.969539] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:19:55.724 ERROR: process (pid: 59536) is no longer running 00:19:55.724 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59536) - No such process 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59513 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59513 ']' 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59513 00:19:55.724 13:39:58 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59513 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59513' 00:19:55.724 killing process with pid 59513 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59513 00:19:55.724 13:39:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59513 00:19:58.256 00:19:58.256 real 0m4.490s 00:19:58.256 user 0m12.049s 00:19:58.256 sys 0m0.737s 00:19:58.256 13:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:58.256 13:40:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:58.256 ************************************ 00:19:58.256 END TEST locking_overlapped_coremask 00:19:58.256 ************************************ 00:19:58.256 13:40:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:19:58.256 13:40:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:58.256 13:40:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:58.256 13:40:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:58.256 ************************************ 00:19:58.256 START TEST 
locking_overlapped_coremask_via_rpc 00:19:58.256 ************************************ 00:19:58.256 13:40:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:19:58.256 13:40:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59600 00:19:58.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.256 13:40:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59600 /var/tmp/spdk.sock 00:19:58.256 13:40:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:19:58.256 13:40:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59600 ']' 00:19:58.256 13:40:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.256 13:40:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.256 13:40:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.256 13:40:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.256 13:40:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:58.256 [2024-11-20 13:40:00.916978] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:19:58.256 [2024-11-20 13:40:00.917176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59600 ] 00:19:58.256 [2024-11-20 13:40:01.103414] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:19:58.256 [2024-11-20 13:40:01.103555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:58.514 [2024-11-20 13:40:01.286442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.514 [2024-11-20 13:40:01.286694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.514 [2024-11-20 13:40:01.286743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.449 13:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.449 13:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:59.449 13:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59624 00:19:59.449 13:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:19:59.449 13:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59624 /var/tmp/spdk2.sock 00:19:59.449 13:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59624 ']' 00:19:59.449 13:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:59.449 13:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.449 13:40:02 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:59.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:59.449 13:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.449 13:40:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:59.449 [2024-11-20 13:40:02.302356] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:19:59.449 [2024-11-20 13:40:02.303066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59624 ] 00:19:59.707 [2024-11-20 13:40:02.509945] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:19:59.707 [2024-11-20 13:40:02.510031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:59.975 [2024-11-20 13:40:02.818860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:59.975 [2024-11-20 13:40:02.818943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:59.975 [2024-11-20 13:40:02.818945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:02.511 13:40:05 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:02.511 [2024-11-20 13:40:05.120248] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59600 has claimed it. 00:20:02.511 request: 00:20:02.511 { 00:20:02.511 "method": "framework_enable_cpumask_locks", 00:20:02.511 "req_id": 1 00:20:02.511 } 00:20:02.511 Got JSON-RPC error response 00:20:02.511 response: 00:20:02.511 { 00:20:02.511 "code": -32603, 00:20:02.511 "message": "Failed to claim CPU core: 2" 00:20:02.511 } 00:20:02.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59600 /var/tmp/spdk.sock 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59600 ']' 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.511 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:02.769 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.769 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:02.769 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59624 /var/tmp/spdk2.sock 00:20:02.769 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59624 ']' 00:20:02.769 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:20:02.769 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:02.769 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:20:02.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:20:02.769 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:02.769 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:03.027 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:03.027 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:03.027 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:20:03.027 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:20:03.027 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:20:03.027 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:20:03.027 00:20:03.027 real 0m4.941s 00:20:03.027 user 0m1.851s 00:20:03.027 sys 0m0.210s 00:20:03.027 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.027 13:40:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:03.027 ************************************ 00:20:03.027 END TEST locking_overlapped_coremask_via_rpc 00:20:03.027 ************************************ 00:20:03.027 13:40:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:20:03.027 13:40:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59600 ]] 00:20:03.027 13:40:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59600 00:20:03.027 13:40:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59600 ']' 00:20:03.027 13:40:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59600 00:20:03.027 13:40:05 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:20:03.027 13:40:05 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.027 13:40:05 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59600 00:20:03.027 killing process with pid 59600 00:20:03.027 13:40:05 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:03.027 13:40:05 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:03.027 13:40:05 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59600' 00:20:03.027 13:40:05 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59600 00:20:03.027 13:40:05 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59600 00:20:05.557 13:40:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59624 ]] 00:20:05.557 13:40:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59624 00:20:05.557 13:40:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59624 ']' 00:20:05.557 13:40:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59624 00:20:05.557 13:40:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:20:05.557 13:40:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.557 13:40:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59624 00:20:05.557 killing process with pid 59624 00:20:05.557 13:40:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:05.557 13:40:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:05.557 13:40:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59624' 00:20:05.557 13:40:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59624 00:20:05.557 13:40:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59624 00:20:08.086 13:40:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:20:08.086 Process with pid 59600 is not found 00:20:08.086 Process with pid 59624 is not found 00:20:08.086 13:40:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:20:08.086 13:40:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59600 ]] 00:20:08.086 13:40:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59600 00:20:08.086 13:40:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59600 ']' 00:20:08.086 13:40:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59600 00:20:08.086 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59600) - No such process 00:20:08.086 13:40:10 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59600 is not found' 00:20:08.086 13:40:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59624 ]] 00:20:08.086 13:40:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59624 00:20:08.086 13:40:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59624 ']' 00:20:08.086 13:40:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59624 00:20:08.086 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59624) - No such process 00:20:08.086 13:40:10 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59624 is not found' 00:20:08.086 13:40:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:20:08.086 ************************************ 00:20:08.086 END TEST cpu_locks 00:20:08.086 ************************************ 00:20:08.086 00:20:08.086 real 0m51.230s 00:20:08.086 user 1m28.868s 00:20:08.086 sys 0m7.885s 00:20:08.086 13:40:10 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:20:08.086 13:40:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:20:08.086 ************************************ 00:20:08.086 END TEST event 00:20:08.086 ************************************ 00:20:08.086 00:20:08.086 real 1m24.063s 00:20:08.086 user 2m34.313s 00:20:08.086 sys 0m12.125s 00:20:08.086 13:40:10 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:08.086 13:40:10 event -- common/autotest_common.sh@10 -- # set +x 00:20:08.086 13:40:10 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:20:08.086 13:40:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:08.086 13:40:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:08.086 13:40:10 -- common/autotest_common.sh@10 -- # set +x 00:20:08.086 ************************************ 00:20:08.086 START TEST thread 00:20:08.086 ************************************ 00:20:08.087 13:40:10 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:20:08.087 * Looking for test storage... 
00:20:08.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:20:08.087 13:40:10 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:08.087 13:40:10 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:20:08.087 13:40:10 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:08.087 13:40:10 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:08.087 13:40:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:08.087 13:40:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:08.087 13:40:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:08.087 13:40:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:20:08.087 13:40:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:20:08.087 13:40:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:20:08.087 13:40:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:20:08.087 13:40:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:20:08.087 13:40:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:20:08.087 13:40:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:20:08.087 13:40:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:08.087 13:40:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:20:08.087 13:40:10 thread -- scripts/common.sh@345 -- # : 1 00:20:08.087 13:40:10 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:08.087 13:40:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:08.087 13:40:10 thread -- scripts/common.sh@365 -- # decimal 1 00:20:08.087 13:40:10 thread -- scripts/common.sh@353 -- # local d=1 00:20:08.087 13:40:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:08.087 13:40:10 thread -- scripts/common.sh@355 -- # echo 1 00:20:08.087 13:40:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:20:08.087 13:40:10 thread -- scripts/common.sh@366 -- # decimal 2 00:20:08.087 13:40:10 thread -- scripts/common.sh@353 -- # local d=2 00:20:08.087 13:40:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:08.087 13:40:10 thread -- scripts/common.sh@355 -- # echo 2 00:20:08.087 13:40:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:20:08.087 13:40:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:08.087 13:40:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:08.087 13:40:10 thread -- scripts/common.sh@368 -- # return 0 00:20:08.087 13:40:10 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:08.087 13:40:10 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:08.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.087 --rc genhtml_branch_coverage=1 00:20:08.087 --rc genhtml_function_coverage=1 00:20:08.087 --rc genhtml_legend=1 00:20:08.087 --rc geninfo_all_blocks=1 00:20:08.087 --rc geninfo_unexecuted_blocks=1 00:20:08.087 00:20:08.087 ' 00:20:08.087 13:40:10 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:08.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.087 --rc genhtml_branch_coverage=1 00:20:08.087 --rc genhtml_function_coverage=1 00:20:08.087 --rc genhtml_legend=1 00:20:08.087 --rc geninfo_all_blocks=1 00:20:08.087 --rc geninfo_unexecuted_blocks=1 00:20:08.087 00:20:08.087 ' 00:20:08.087 13:40:10 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:08.087 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.087 --rc genhtml_branch_coverage=1 00:20:08.087 --rc genhtml_function_coverage=1 00:20:08.087 --rc genhtml_legend=1 00:20:08.087 --rc geninfo_all_blocks=1 00:20:08.087 --rc geninfo_unexecuted_blocks=1 00:20:08.087 00:20:08.087 ' 00:20:08.087 13:40:10 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:08.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:08.087 --rc genhtml_branch_coverage=1 00:20:08.087 --rc genhtml_function_coverage=1 00:20:08.087 --rc genhtml_legend=1 00:20:08.087 --rc geninfo_all_blocks=1 00:20:08.087 --rc geninfo_unexecuted_blocks=1 00:20:08.087 00:20:08.087 ' 00:20:08.087 13:40:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:20:08.087 13:40:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:20:08.087 13:40:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:08.087 13:40:10 thread -- common/autotest_common.sh@10 -- # set +x 00:20:08.087 ************************************ 00:20:08.087 START TEST thread_poller_perf 00:20:08.087 ************************************ 00:20:08.087 13:40:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:20:08.087 [2024-11-20 13:40:10.983034] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:20:08.087 [2024-11-20 13:40:10.983460] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59819 ] 00:20:08.345 [2024-11-20 13:40:11.174793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.604 [2024-11-20 13:40:11.343326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.604 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:20:09.993 [2024-11-20T13:40:12.910Z] ====================================== 00:20:09.993 [2024-11-20T13:40:12.910Z] busy:2210912386 (cyc) 00:20:09.993 [2024-11-20T13:40:12.910Z] total_run_count: 310000 00:20:09.993 [2024-11-20T13:40:12.910Z] tsc_hz: 2200000000 (cyc) 00:20:09.993 [2024-11-20T13:40:12.910Z] ====================================== 00:20:09.993 [2024-11-20T13:40:12.910Z] poller_cost: 7131 (cyc), 3241 (nsec) 00:20:09.993 00:20:09.993 ************************************ 00:20:09.993 END TEST thread_poller_perf 00:20:09.993 ************************************ 00:20:09.993 real 0m1.658s 00:20:09.993 user 0m1.428s 00:20:09.993 sys 0m0.116s 00:20:09.993 13:40:12 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:09.993 13:40:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:20:09.993 13:40:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:20:09.993 13:40:12 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:20:09.993 13:40:12 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:09.993 13:40:12 thread -- common/autotest_common.sh@10 -- # set +x 00:20:09.993 ************************************ 00:20:09.993 START TEST thread_poller_perf 00:20:09.993 
************************************ 00:20:09.993 13:40:12 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:20:09.993 [2024-11-20 13:40:12.687235] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:20:09.993 [2024-11-20 13:40:12.687665] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59861 ] 00:20:09.993 [2024-11-20 13:40:12.859380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.251 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:20:10.251 [2024-11-20 13:40:12.982104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.627 [2024-11-20T13:40:14.544Z] ====================================== 00:20:11.627 [2024-11-20T13:40:14.544Z] busy:2204262882 (cyc) 00:20:11.627 [2024-11-20T13:40:14.544Z] total_run_count: 3885000 00:20:11.627 [2024-11-20T13:40:14.544Z] tsc_hz: 2200000000 (cyc) 00:20:11.627 [2024-11-20T13:40:14.544Z] ====================================== 00:20:11.627 [2024-11-20T13:40:14.544Z] poller_cost: 567 (cyc), 257 (nsec) 00:20:11.627 00:20:11.627 real 0m1.564s 00:20:11.627 user 0m1.349s 00:20:11.627 sys 0m0.104s 00:20:11.627 13:40:14 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:11.627 ************************************ 00:20:11.627 END TEST thread_poller_perf 00:20:11.627 ************************************ 00:20:11.627 13:40:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:20:11.627 13:40:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:20:11.627 ************************************ 00:20:11.627 END TEST thread 00:20:11.627 ************************************ 00:20:11.627 
00:20:11.627 real 0m3.503s 00:20:11.628 user 0m2.905s 00:20:11.628 sys 0m0.374s 00:20:11.628 13:40:14 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:11.628 13:40:14 thread -- common/autotest_common.sh@10 -- # set +x 00:20:11.628 13:40:14 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:20:11.628 13:40:14 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:20:11.628 13:40:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:11.628 13:40:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:11.628 13:40:14 -- common/autotest_common.sh@10 -- # set +x 00:20:11.628 ************************************ 00:20:11.628 START TEST app_cmdline 00:20:11.628 ************************************ 00:20:11.628 13:40:14 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:20:11.628 * Looking for test storage... 00:20:11.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:20:11.628 13:40:14 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:11.628 13:40:14 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:20:11.628 13:40:14 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:11.628 13:40:14 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@345 -- # : 1 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:11.628 13:40:14 app_cmdline -- scripts/common.sh@368 -- # return 0 00:20:11.628 13:40:14 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:11.628 13:40:14 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:11.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.628 --rc genhtml_branch_coverage=1 00:20:11.628 --rc genhtml_function_coverage=1 00:20:11.628 --rc 
genhtml_legend=1 00:20:11.628 --rc geninfo_all_blocks=1 00:20:11.628 --rc geninfo_unexecuted_blocks=1 00:20:11.628 00:20:11.628 ' 00:20:11.628 13:40:14 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:11.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.628 --rc genhtml_branch_coverage=1 00:20:11.628 --rc genhtml_function_coverage=1 00:20:11.628 --rc genhtml_legend=1 00:20:11.628 --rc geninfo_all_blocks=1 00:20:11.628 --rc geninfo_unexecuted_blocks=1 00:20:11.628 00:20:11.628 ' 00:20:11.628 13:40:14 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:11.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.628 --rc genhtml_branch_coverage=1 00:20:11.628 --rc genhtml_function_coverage=1 00:20:11.628 --rc genhtml_legend=1 00:20:11.628 --rc geninfo_all_blocks=1 00:20:11.628 --rc geninfo_unexecuted_blocks=1 00:20:11.628 00:20:11.628 ' 00:20:11.628 13:40:14 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:11.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:11.628 --rc genhtml_branch_coverage=1 00:20:11.628 --rc genhtml_function_coverage=1 00:20:11.628 --rc genhtml_legend=1 00:20:11.628 --rc geninfo_all_blocks=1 00:20:11.628 --rc geninfo_unexecuted_blocks=1 00:20:11.628 00:20:11.628 ' 00:20:11.628 13:40:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:20:11.628 13:40:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59949 00:20:11.628 13:40:14 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:20:11.628 13:40:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59949 00:20:11.628 13:40:14 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59949 ']' 00:20:11.628 13:40:14 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.628 13:40:14 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:20:11.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.628 13:40:14 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.628 13:40:14 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.628 13:40:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:20:11.887 [2024-11-20 13:40:14.626834] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:20:11.887 [2024-11-20 13:40:14.627090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59949 ] 00:20:12.146 [2024-11-20 13:40:14.811164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.146 [2024-11-20 13:40:14.937750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.083 13:40:15 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.083 13:40:15 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:20:13.083 13:40:15 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:20:13.342 { 00:20:13.342 "version": "SPDK v25.01-pre git sha1 fa4f4fd15", 00:20:13.342 "fields": { 00:20:13.342 "major": 25, 00:20:13.342 "minor": 1, 00:20:13.342 "patch": 0, 00:20:13.342 "suffix": "-pre", 00:20:13.342 "commit": "fa4f4fd15" 00:20:13.342 } 00:20:13.342 } 00:20:13.342 13:40:16 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:20:13.342 13:40:16 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:20:13.342 13:40:16 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:20:13.342 13:40:16 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:20:13.342 13:40:16 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:20:13.342 13:40:16 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.342 13:40:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:20:13.342 13:40:16 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:20:13.342 13:40:16 app_cmdline -- app/cmdline.sh@26 -- # sort 00:20:13.342 13:40:16 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.342 13:40:16 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:20:13.342 13:40:16 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:20:13.342 13:40:16 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:20:13.342 13:40:16 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:20:13.342 13:40:16 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:20:13.342 13:40:16 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:13.342 13:40:16 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.342 13:40:16 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:13.342 13:40:16 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.342 13:40:16 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:13.342 13:40:16 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.342 13:40:16 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:13.342 13:40:16 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:13.342 13:40:16 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:20:13.603 request: 00:20:13.603 { 00:20:13.603 "method": "env_dpdk_get_mem_stats", 00:20:13.603 "req_id": 1 00:20:13.603 } 00:20:13.603 Got JSON-RPC error response 00:20:13.603 response: 00:20:13.603 { 00:20:13.603 "code": -32601, 00:20:13.603 "message": "Method not found" 00:20:13.603 } 00:20:13.603 13:40:16 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:20:13.603 13:40:16 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:13.603 13:40:16 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:13.603 13:40:16 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:13.603 13:40:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59949 00:20:13.603 13:40:16 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59949 ']' 00:20:13.603 13:40:16 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59949 00:20:13.603 13:40:16 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:20:13.603 13:40:16 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.603 13:40:16 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59949 00:20:13.603 killing process with pid 59949 00:20:13.603 13:40:16 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:13.603 13:40:16 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:13.603 13:40:16 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59949' 00:20:13.603 13:40:16 app_cmdline -- common/autotest_common.sh@973 -- # kill 59949 00:20:13.603 13:40:16 app_cmdline -- common/autotest_common.sh@978 -- # wait 59949 00:20:16.138 00:20:16.138 real 0m4.437s 00:20:16.138 user 0m4.872s 00:20:16.138 sys 0m0.667s 00:20:16.138 13:40:18 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.138 ************************************ 00:20:16.138 END TEST app_cmdline 00:20:16.138 ************************************ 00:20:16.138 13:40:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:20:16.138 13:40:18 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:20:16.138 13:40:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:16.138 13:40:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.138 13:40:18 -- common/autotest_common.sh@10 -- # set +x 00:20:16.138 ************************************ 00:20:16.138 START TEST version 00:20:16.138 ************************************ 00:20:16.138 13:40:18 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:20:16.138 * Looking for test storage... 00:20:16.138 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:20:16.138 13:40:18 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:16.138 13:40:18 version -- common/autotest_common.sh@1693 -- # lcov --version 00:20:16.138 13:40:18 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:16.138 13:40:18 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:16.138 13:40:18 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:16.138 13:40:18 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:16.138 13:40:18 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:16.138 13:40:18 version -- scripts/common.sh@336 -- # IFS=.-: 00:20:16.138 13:40:18 version -- scripts/common.sh@336 -- # read -ra ver1 00:20:16.138 13:40:18 version -- scripts/common.sh@337 -- # IFS=.-: 00:20:16.139 13:40:18 version -- scripts/common.sh@337 -- # read -ra ver2 00:20:16.139 13:40:18 version -- scripts/common.sh@338 -- # local 'op=<' 00:20:16.139 13:40:18 version -- scripts/common.sh@340 -- # ver1_l=2 00:20:16.139 13:40:18 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:20:16.139 13:40:18 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:16.139 13:40:18 version -- scripts/common.sh@344 -- # case "$op" in 00:20:16.139 13:40:18 version -- scripts/common.sh@345 -- # : 1 00:20:16.139 13:40:18 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:16.139 13:40:18 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:16.139 13:40:18 version -- scripts/common.sh@365 -- # decimal 1 00:20:16.139 13:40:18 version -- scripts/common.sh@353 -- # local d=1 00:20:16.139 13:40:18 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:16.139 13:40:18 version -- scripts/common.sh@355 -- # echo 1 00:20:16.139 13:40:18 version -- scripts/common.sh@365 -- # ver1[v]=1 00:20:16.139 13:40:18 version -- scripts/common.sh@366 -- # decimal 2 00:20:16.139 13:40:19 version -- scripts/common.sh@353 -- # local d=2 00:20:16.139 13:40:19 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:16.139 13:40:19 version -- scripts/common.sh@355 -- # echo 2 00:20:16.139 13:40:19 version -- scripts/common.sh@366 -- # ver2[v]=2 00:20:16.139 13:40:19 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:16.139 13:40:19 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:16.139 13:40:19 version -- scripts/common.sh@368 -- # return 0 00:20:16.139 13:40:19 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:16.139 13:40:19 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:16.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.139 --rc genhtml_branch_coverage=1 00:20:16.139 --rc genhtml_function_coverage=1 00:20:16.139 --rc genhtml_legend=1 00:20:16.139 --rc geninfo_all_blocks=1 00:20:16.139 --rc geninfo_unexecuted_blocks=1 00:20:16.139 00:20:16.139 ' 00:20:16.139 13:40:19 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:20:16.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.139 --rc genhtml_branch_coverage=1 00:20:16.139 --rc genhtml_function_coverage=1 00:20:16.139 --rc genhtml_legend=1 00:20:16.139 --rc geninfo_all_blocks=1 00:20:16.139 --rc geninfo_unexecuted_blocks=1 00:20:16.139 00:20:16.139 ' 00:20:16.139 13:40:19 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:16.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.139 --rc genhtml_branch_coverage=1 00:20:16.139 --rc genhtml_function_coverage=1 00:20:16.139 --rc genhtml_legend=1 00:20:16.139 --rc geninfo_all_blocks=1 00:20:16.139 --rc geninfo_unexecuted_blocks=1 00:20:16.139 00:20:16.139 ' 00:20:16.139 13:40:19 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:16.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.139 --rc genhtml_branch_coverage=1 00:20:16.139 --rc genhtml_function_coverage=1 00:20:16.139 --rc genhtml_legend=1 00:20:16.139 --rc geninfo_all_blocks=1 00:20:16.139 --rc geninfo_unexecuted_blocks=1 00:20:16.139 00:20:16.139 ' 00:20:16.139 13:40:19 version -- app/version.sh@17 -- # get_header_version major 00:20:16.139 13:40:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:20:16.139 13:40:19 version -- app/version.sh@14 -- # cut -f2 00:20:16.139 13:40:19 version -- app/version.sh@14 -- # tr -d '"' 00:20:16.139 13:40:19 version -- app/version.sh@17 -- # major=25 00:20:16.139 13:40:19 version -- app/version.sh@18 -- # get_header_version minor 00:20:16.139 13:40:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:20:16.139 13:40:19 version -- app/version.sh@14 -- # cut -f2 00:20:16.139 13:40:19 version -- app/version.sh@14 -- # tr -d '"' 00:20:16.139 13:40:19 version -- app/version.sh@18 -- # minor=1 00:20:16.139 13:40:19 
version -- app/version.sh@19 -- # get_header_version patch 00:20:16.139 13:40:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:20:16.139 13:40:19 version -- app/version.sh@14 -- # cut -f2 00:20:16.139 13:40:19 version -- app/version.sh@14 -- # tr -d '"' 00:20:16.139 13:40:19 version -- app/version.sh@19 -- # patch=0 00:20:16.139 13:40:19 version -- app/version.sh@20 -- # get_header_version suffix 00:20:16.139 13:40:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:20:16.139 13:40:19 version -- app/version.sh@14 -- # cut -f2 00:20:16.139 13:40:19 version -- app/version.sh@14 -- # tr -d '"' 00:20:16.139 13:40:19 version -- app/version.sh@20 -- # suffix=-pre 00:20:16.139 13:40:19 version -- app/version.sh@22 -- # version=25.1 00:20:16.139 13:40:19 version -- app/version.sh@25 -- # (( patch != 0 )) 00:20:16.139 13:40:19 version -- app/version.sh@28 -- # version=25.1rc0 00:20:16.139 13:40:19 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:20:16.139 13:40:19 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:20:16.417 13:40:19 version -- app/version.sh@30 -- # py_version=25.1rc0 00:20:16.417 13:40:19 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:20:16.417 00:20:16.417 real 0m0.281s 00:20:16.417 user 0m0.186s 00:20:16.417 sys 0m0.128s 00:20:16.417 ************************************ 00:20:16.418 END TEST version 00:20:16.418 ************************************ 00:20:16.418 13:40:19 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.418 13:40:19 version -- common/autotest_common.sh@10 -- # set +x 00:20:16.418 
13:40:19 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:20:16.418 13:40:19 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:20:16.418 13:40:19 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:20:16.418 13:40:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:16.418 13:40:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.418 13:40:19 -- common/autotest_common.sh@10 -- # set +x 00:20:16.418 ************************************ 00:20:16.418 START TEST bdev_raid 00:20:16.418 ************************************ 00:20:16.418 13:40:19 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:20:16.418 * Looking for test storage... 00:20:16.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:16.418 13:40:19 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:16.418 13:40:19 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:20:16.418 13:40:19 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:16.418 13:40:19 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:16.418 13:40:19 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:16.418 13:40:19 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:16.418 13:40:19 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:16.418 13:40:19 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:20:16.418 13:40:19 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:20:16.418 13:40:19 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:20:16.418 13:40:19 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:20:16.418 13:40:19 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:20:16.418 13:40:19 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:20:16.418 13:40:19 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:20:16.418 13:40:19 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:20:16.418 13:40:19 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:20:16.418 13:40:19 bdev_raid -- scripts/common.sh@345 -- # : 1 00:20:16.418 13:40:19 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:16.418 13:40:19 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:16.419 13:40:19 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:20:16.419 13:40:19 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:20:16.419 13:40:19 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:16.419 13:40:19 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:20:16.419 13:40:19 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:16.419 13:40:19 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:20:16.419 13:40:19 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:20:16.419 13:40:19 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:16.419 13:40:19 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:20:16.419 13:40:19 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:16.419 13:40:19 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:16.419 13:40:19 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:16.419 13:40:19 bdev_raid -- scripts/common.sh@368 -- # return 0 00:20:16.419 13:40:19 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:16.419 13:40:19 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:16.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.419 --rc genhtml_branch_coverage=1 00:20:16.419 --rc genhtml_function_coverage=1 00:20:16.419 --rc genhtml_legend=1 00:20:16.419 --rc geninfo_all_blocks=1 00:20:16.419 --rc geninfo_unexecuted_blocks=1 00:20:16.419 00:20:16.419 ' 00:20:16.419 13:40:19 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:16.419 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:20:16.419 --rc genhtml_branch_coverage=1 00:20:16.419 --rc genhtml_function_coverage=1 00:20:16.419 --rc genhtml_legend=1 00:20:16.419 --rc geninfo_all_blocks=1 00:20:16.419 --rc geninfo_unexecuted_blocks=1 00:20:16.419 00:20:16.419 ' 00:20:16.419 13:40:19 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:16.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.419 --rc genhtml_branch_coverage=1 00:20:16.419 --rc genhtml_function_coverage=1 00:20:16.419 --rc genhtml_legend=1 00:20:16.419 --rc geninfo_all_blocks=1 00:20:16.419 --rc geninfo_unexecuted_blocks=1 00:20:16.419 00:20:16.419 ' 00:20:16.419 13:40:19 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:16.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.419 --rc genhtml_branch_coverage=1 00:20:16.419 --rc genhtml_function_coverage=1 00:20:16.419 --rc genhtml_legend=1 00:20:16.419 --rc geninfo_all_blocks=1 00:20:16.419 --rc geninfo_unexecuted_blocks=1 00:20:16.419 00:20:16.419 ' 00:20:16.419 13:40:19 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:16.419 13:40:19 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:20:16.419 13:40:19 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:20:16.686 13:40:19 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:20:16.686 13:40:19 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:20:16.686 13:40:19 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:20:16.686 13:40:19 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:20:16.686 13:40:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:16.686 13:40:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.686 13:40:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:16.686 ************************************ 
00:20:16.686 START TEST raid1_resize_data_offset_test 00:20:16.686 ************************************ 00:20:16.686 13:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:20:16.686 13:40:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60132 00:20:16.686 13:40:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60132' 00:20:16.686 13:40:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:16.686 Process raid pid: 60132 00:20:16.687 13:40:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60132 00:20:16.687 13:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60132 ']' 00:20:16.687 13:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.687 13:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.687 13:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.687 13:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.687 13:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.687 [2024-11-20 13:40:19.455757] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:20:16.687 [2024-11-20 13:40:19.456215] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.945 [2024-11-20 13:40:19.646508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.945 [2024-11-20 13:40:19.785425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.204 [2024-11-20 13:40:20.029637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:17.204 [2024-11-20 13:40:20.029694] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:17.836 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.836 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:20:17.836 13:40:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:20:17.836 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.836 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.836 malloc0 00:20:17.836 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.836 13:40:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:20:17.836 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.836 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.836 malloc1 00:20:17.836 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.836 13:40:20 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:20:17.836 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.836 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.836 null0 00:20:17.836 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.836 13:40:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:20:17.836 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.836 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.836 [2024-11-20 13:40:20.637190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:20:17.836 [2024-11-20 13:40:20.639622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:17.836 [2024-11-20 13:40:20.639884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:20:17.836 [2024-11-20 13:40:20.640136] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:17.836 [2024-11-20 13:40:20.640160] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:20:17.836 [2024-11-20 13:40:20.640513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:20:17.836 [2024-11-20 13:40:20.640702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:17.836 [2024-11-20 13:40:20.640721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:20:17.837 [2024-11-20 13:40:20.640928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:20:17.837 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.837 13:40:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.837 13:40:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:20:17.837 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.837 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.837 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.837 13:40:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:20:17.837 13:40:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:20:17.837 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.837 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.837 [2024-11-20 13:40:20.701374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:20:17.837 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.837 13:40:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:20:17.837 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.837 13:40:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.405 malloc2 00:20:18.405 13:40:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.405 13:40:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:20:18.405 13:40:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.405 13:40:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.405 [2024-11-20 13:40:21.224001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:18.405 [2024-11-20 13:40:21.242513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:18.405 13:40:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.405 [2024-11-20 13:40:21.245234] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:20:18.405 13:40:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.405 13:40:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.405 13:40:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:20:18.406 13:40:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.406 13:40:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.406 13:40:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:20:18.406 13:40:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60132 00:20:18.406 13:40:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60132 ']' 00:20:18.406 13:40:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60132 00:20:18.406 13:40:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:20:18.406 13:40:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:20:18.406 13:40:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60132 00:20:18.665 killing process with pid 60132 00:20:18.665 13:40:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:18.665 13:40:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:18.665 13:40:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60132' 00:20:18.665 13:40:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60132 00:20:18.665 13:40:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60132 00:20:18.665 [2024-11-20 13:40:21.332166] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:18.665 [2024-11-20 13:40:21.333980] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:20:18.665 [2024-11-20 13:40:21.334103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.665 [2024-11-20 13:40:21.334132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:20:18.665 [2024-11-20 13:40:21.367012] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:18.665 [2024-11-20 13:40:21.367723] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:18.665 [2024-11-20 13:40:21.367760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:20:20.568 [2024-11-20 13:40:22.971018] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:21.135 ************************************ 00:20:21.135 END TEST raid1_resize_data_offset_test 00:20:21.135 ************************************ 00:20:21.135 13:40:24 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:20:21.135 00:20:21.135 real 0m4.698s 00:20:21.135 user 0m4.643s 00:20:21.135 sys 0m0.685s 00:20:21.135 13:40:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.135 13:40:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.394 13:40:24 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:20:21.394 13:40:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:21.394 13:40:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.394 13:40:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:21.394 ************************************ 00:20:21.394 START TEST raid0_resize_superblock_test 00:20:21.394 ************************************ 00:20:21.394 13:40:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:20:21.394 13:40:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:20:21.394 Process raid pid: 60221 00:20:21.394 13:40:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60221 00:20:21.394 13:40:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60221' 00:20:21.394 13:40:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60221 00:20:21.394 13:40:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60221 ']' 00:20:21.394 13:40:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:21.394 13:40:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.394 13:40:24 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.394 13:40:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.394 13:40:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.394 13:40:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.394 [2024-11-20 13:40:24.214040] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:20:21.394 [2024-11-20 13:40:24.214224] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.652 [2024-11-20 13:40:24.403450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.652 [2024-11-20 13:40:24.540778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.918 [2024-11-20 13:40:24.758538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.919 [2024-11-20 13:40:24.758613] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:22.486 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.486 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:20:22.486 13:40:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:20:22.486 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.486 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:20:23.052 malloc0 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.052 [2024-11-20 13:40:25.793258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:20:23.052 [2024-11-20 13:40:25.793365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.052 [2024-11-20 13:40:25.793399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:23.052 [2024-11-20 13:40:25.793418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.052 [2024-11-20 13:40:25.796219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.052 [2024-11-20 13:40:25.796271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:20:23.052 pt0 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.052 b768504c-2851-48b9-914d-203a02169efe 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.052 48f72aa4-41ef-4412-aba6-0f9a0a0a1d52 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.052 07905ef6-0c7d-44dd-affd-6222f9c650ab 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.052 [2024-11-20 13:40:25.937165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 48f72aa4-41ef-4412-aba6-0f9a0a0a1d52 is claimed 00:20:23.052 [2024-11-20 13:40:25.937289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 07905ef6-0c7d-44dd-affd-6222f9c650ab is claimed 00:20:23.052 [2024-11-20 13:40:25.937460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:23.052 [2024-11-20 13:40:25.937499] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:20:23.052 [2024-11-20 13:40:25.937800] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:23.052 [2024-11-20 13:40:25.938089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:23.052 [2024-11-20 13:40:25.938124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:20:23.052 [2024-11-20 13:40:25.938346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:20:23.052 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.310 13:40:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.310 13:40:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:20:23.310 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:20:23.310 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.310 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:20:23.311 13:40:26 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:20:23.311 [2024-11-20 13:40:26.061605] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.311 [2024-11-20 13:40:26.117524] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:20:23.311 [2024-11-20 13:40:26.117564] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '48f72aa4-41ef-4412-aba6-0f9a0a0a1d52' was resized: old size 131072, new size 204800 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.311 [2024-11-20 13:40:26.125362] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:20:23.311 [2024-11-20 13:40:26.125393] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '07905ef6-0c7d-44dd-affd-6222f9c650ab' was resized: old size 131072, new size 204800 00:20:23.311 [2024-11-20 13:40:26.125435] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.311 13:40:26 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.311 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.571 [2024-11-20 13:40:26.237579] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.571 [2024-11-20 13:40:26.289386] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:20:23.571 [2024-11-20 13:40:26.289502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:20:23.571 [2024-11-20 13:40:26.289530] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:23.571 [2024-11-20 13:40:26.289551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:20:23.571 [2024-11-20 13:40:26.289696] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:23.571 [2024-11-20 13:40:26.289745] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:23.571 [2024-11-20 13:40:26.289764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.571 [2024-11-20 13:40:26.297207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:20:23.571 [2024-11-20 13:40:26.297305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.571 [2024-11-20 13:40:26.297332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:20:23.571 [2024-11-20 13:40:26.297348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.571 [2024-11-20 13:40:26.300514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.571 [2024-11-20 13:40:26.300570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:20:23.571 pt0 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.571 [2024-11-20 13:40:26.303099] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 48f72aa4-41ef-4412-aba6-0f9a0a0a1d52 00:20:23.571 [2024-11-20 13:40:26.303174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 48f72aa4-41ef-4412-aba6-0f9a0a0a1d52 is claimed 00:20:23.571 [2024-11-20 13:40:26.303330] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 07905ef6-0c7d-44dd-affd-6222f9c650ab 00:20:23.571 [2024-11-20 13:40:26.303378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 07905ef6-0c7d-44dd-affd-6222f9c650ab is claimed 00:20:23.571 [2024-11-20 13:40:26.303542] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 07905ef6-0c7d-44dd-affd-6222f9c650ab (2) smaller than existing raid bdev Raid (3) 00:20:23.571 [2024-11-20 13:40:26.303671] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 48f72aa4-41ef-4412-aba6-0f9a0a0a1d52: File exists 00:20:23.571 [2024-11-20 13:40:26.303738] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:23.571 [2024-11-20 13:40:26.303767] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:20:23.571 [2024-11-20 13:40:26.304205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:23.571 [2024-11-20 13:40:26.304426] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:23.571 [2024-11-20 
13:40:26.304443] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:20:23.571 [2024-11-20 13:40:26.304637] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.571 [2024-11-20 13:40:26.317549] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60221 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60221 ']' 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60221 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60221 00:20:23.571 killing process with pid 60221 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60221' 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60221 00:20:23.571 [2024-11-20 13:40:26.403079] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:23.571 13:40:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60221 00:20:23.571 [2024-11-20 13:40:26.403186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:23.571 [2024-11-20 13:40:26.403260] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:23.571 [2024-11-20 13:40:26.403276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:20:24.946 [2024-11-20 13:40:27.706644] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:25.880 ************************************ 00:20:25.880 END TEST raid0_resize_superblock_test 00:20:25.880 ************************************ 00:20:25.880 13:40:28 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:20:25.880 00:20:25.880 real 0m4.664s 00:20:25.880 user 0m4.996s 00:20:25.880 sys 0m0.673s 00:20:25.880 13:40:28 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:20:25.880 13:40:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.138 13:40:28 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:20:26.138 13:40:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:26.138 13:40:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:26.138 13:40:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:26.138 ************************************ 00:20:26.138 START TEST raid1_resize_superblock_test 00:20:26.138 ************************************ 00:20:26.138 13:40:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:20:26.138 13:40:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:20:26.138 13:40:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60320 00:20:26.138 Process raid pid: 60320 00:20:26.138 13:40:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60320' 00:20:26.138 13:40:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:26.138 13:40:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60320 00:20:26.138 13:40:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60320 ']' 00:20:26.138 13:40:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:26.138 13:40:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.138 13:40:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.138 13:40:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.138 13:40:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.138 [2024-11-20 13:40:28.927300] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:20:26.138 [2024-11-20 13:40:28.927531] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.397 [2024-11-20 13:40:29.113766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.397 [2024-11-20 13:40:29.247571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.656 [2024-11-20 13:40:29.460879] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:26.656 [2024-11-20 13:40:29.461247] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.223 13:40:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.223 13:40:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:20:27.223 13:40:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:20:27.223 13:40:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.223 13:40:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.790 malloc0 00:20:27.790 
13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.790 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:20:27.790 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.790 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.790 [2024-11-20 13:40:30.474309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:20:27.790 [2024-11-20 13:40:30.474403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.790 [2024-11-20 13:40:30.474434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:27.790 [2024-11-20 13:40:30.474467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.790 [2024-11-20 13:40:30.477432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.790 [2024-11-20 13:40:30.477655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:20:27.790 pt0 00:20:27.790 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.790 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:20:27.790 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.790 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.790 70c7a573-1ff7-4d21-9dbb-8f0db18dfde6 00:20:27.790 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.790 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:20:27.790 13:40:30 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.790 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.790 fb182853-f856-416c-a5e2-58cc8c258a1f 00:20:27.790 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.790 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:20:27.790 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.790 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.790 9b0c6b55-f47b-4e07-9c7d-294295fbbdc9 00:20:27.790 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.790 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:20:27.791 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:20:27.791 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.791 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.791 [2024-11-20 13:40:30.620266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fb182853-f856-416c-a5e2-58cc8c258a1f is claimed 00:20:27.791 [2024-11-20 13:40:30.620372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9b0c6b55-f47b-4e07-9c7d-294295fbbdc9 is claimed 00:20:27.791 [2024-11-20 13:40:30.620586] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:27.791 [2024-11-20 13:40:30.620610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:20:27.791 [2024-11-20 13:40:30.620919] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:27.791 [2024-11-20 13:40:30.621238] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:27.791 [2024-11-20 13:40:30.621255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:20:27.791 [2024-11-20 13:40:30.621481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.791 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.791 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:20:27.791 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:20:27.791 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.791 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.791 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.791 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:20:27.791 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:20:27.791 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:20:27.791 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.791 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.050 [2024-11-20 13:40:30.752666] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.050 [2024-11-20 13:40:30.804722] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:20:28.050 [2024-11-20 13:40:30.804760] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'fb182853-f856-416c-a5e2-58cc8c258a1f' was resized: old size 131072, new size 204800 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.050 13:40:30 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.050 [2024-11-20 13:40:30.812679] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:20:28.050 [2024-11-20 13:40:30.812928] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9b0c6b55-f47b-4e07-9c7d-294295fbbdc9' was resized: old size 131072, new size 204800 00:20:28.050 [2024-11-20 13:40:30.813013] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.050 13:40:30 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.050 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.316 [2024-11-20 13:40:30.961301] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:28.316 13:40:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.316 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:20:28.316 13:40:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:20:28.316 13:40:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:20:28.316 13:40:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:20:28.316 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.316 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.316 [2024-11-20 13:40:31.008463] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:20:28.316 [2024-11-20 13:40:31.008781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:20:28.316 [2024-11-20 13:40:31.008977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:20:28.316 [2024-11-20 13:40:31.009353] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:28.316 [2024-11-20 13:40:31.009858] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:28.316 [2024-11-20 13:40:31.010178] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:28.316 [2024-11-20 13:40:31.010358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:20:28.316 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.316 13:40:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:20:28.316 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.316 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.316 [2024-11-20 13:40:31.016333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:20:28.316 [2024-11-20 13:40:31.016411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.317 [2024-11-20 13:40:31.016444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:20:28.317 [2024-11-20 13:40:31.016467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.317 [2024-11-20 13:40:31.020040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.317 [2024-11-20 13:40:31.020114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:20:28.317 pt0 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.317 [2024-11-20 13:40:31.022845] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev fb182853-f856-416c-a5e2-58cc8c258a1f 00:20:28.317 [2024-11-20 13:40:31.023012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fb182853-f856-416c-a5e2-58cc8c258a1f is claimed 00:20:28.317 [2024-11-20 13:40:31.023200] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9b0c6b55-f47b-4e07-9c7d-294295fbbdc9 00:20:28.317 [2024-11-20 13:40:31.023242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9b0c6b55-f47b-4e07-9c7d-294295fbbdc9 is claimed 00:20:28.317 [2024-11-20 13:40:31.023509] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 9b0c6b55-f47b-4e07-9c7d-294295fbbdc9 (2) smaller than existing raid bdev Raid (3) 00:20:28.317 [2024-11-20 13:40:31.023584] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev fb182853-f856-416c-a5e2-58cc8c258a1f: File exists 00:20:28.317 [2024-11-20 13:40:31.023700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:28.317 [2024-11-20 13:40:31.023735] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:28.317 [2024-11-20 13:40:31.024360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:28.317 [2024-11-20 13:40:31.024696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:28.317 [2024-11-20 
13:40:31.024726] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:20:28.317 [2024-11-20 13:40:31.025184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.317 [2024-11-20 13:40:31.037319] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60320 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60320 ']' 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60320 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60320 00:20:28.317 killing process with pid 60320 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60320' 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60320 00:20:28.317 [2024-11-20 13:40:31.112920] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:28.317 13:40:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60320 00:20:28.317 [2024-11-20 13:40:31.113036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:28.317 [2024-11-20 13:40:31.113124] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:28.317 [2024-11-20 13:40:31.113143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:20:29.704 [2024-11-20 13:40:32.364614] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:30.641 13:40:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:20:30.641 00:20:30.641 real 0m4.593s 00:20:30.641 user 0m4.965s 00:20:30.641 sys 0m0.627s 00:20:30.641 13:40:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:30.641 ************************************ 00:20:30.641 END TEST raid1_resize_superblock_test 00:20:30.641 
************************************ 00:20:30.641 13:40:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.641 13:40:33 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:20:30.641 13:40:33 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:20:30.641 13:40:33 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:20:30.641 13:40:33 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:20:30.641 13:40:33 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:20:30.641 13:40:33 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:20:30.641 13:40:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:30.641 13:40:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:30.641 13:40:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:30.641 ************************************ 00:20:30.641 START TEST raid_function_test_raid0 00:20:30.641 ************************************ 00:20:30.641 13:40:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:20:30.641 13:40:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:20:30.641 13:40:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:20:30.641 13:40:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:20:30.641 Process raid pid: 60421 00:20:30.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
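The raid1_resize_superblock_test run logged above resizes two 64 MiB lvols to 100 MiB, and the raid bdev's usable block count moves from 122880 to 196608 while the base bdevs go from 131072 to 204800 blocks. The per-base difference is a constant 8192 blocks; interpreting that as the superblock/metadata region is an inference from the reported numbers, not something the log states. A minimal arithmetic sketch (512-byte blocks and all sizes taken from the log above):

```python
# Block-count arithmetic as reported in the raid1_resize_superblock_test log.
BLOCK_SIZE = 512  # blocklen reported by raid_bdev_configure_cont

def mib_to_blocks(mib: int) -> int:
    """Convert a size in MiB to 512-byte blocks."""
    return mib * 1024 * 1024 // BLOCK_SIZE

# Before resize: each lvol is 64 MiB, the raid1 bdev reports 122880 blocks.
old_base = mib_to_blocks(64)   # 131072, matches "old size 131072"
old_raid = 122880              # from "(( 122880 == 122880 ))"

# After resize: each lvol is 100 MiB, the raid1 bdev reports 196608 blocks.
new_base = mib_to_blocks(100)  # 204800, matches "new size 204800"
new_raid = 196608              # from "block count was changed from 122880 to 196608"

# The overhead per base bdev is constant across the resize (presumably the
# reserved superblock region; the log itself only shows the raw numbers).
overhead = old_base - old_raid
print(overhead)                          # 8192 blocks (4 MiB)
print(new_base - new_raid == overhead)   # True
```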
00:20:30.641 13:40:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60421 00:20:30.641 13:40:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60421' 00:20:30.641 13:40:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60421 00:20:30.641 13:40:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60421 ']' 00:20:30.641 13:40:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.641 13:40:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:30.641 13:40:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.641 13:40:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.641 13:40:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.641 13:40:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:20:30.900 [2024-11-20 13:40:33.597865] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:20:30.900 [2024-11-20 13:40:33.599073] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.900 [2024-11-20 13:40:33.790236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.158 [2024-11-20 13:40:33.925750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.417 [2024-11-20 13:40:34.132024] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:31.417 [2024-11-20 13:40:34.132283] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:31.677 13:40:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:31.677 13:40:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:20:31.677 13:40:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:20:31.677 13:40:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.677 13:40:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:20:31.677 Base_1 00:20:31.677 13:40:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.677 13:40:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:20:31.677 13:40:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.677 13:40:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:20:31.954 Base_2 00:20:31.954 13:40:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.954 13:40:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:20:31.954 13:40:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.954 13:40:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:20:31.954 [2024-11-20 13:40:34.612526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:20:31.954 [2024-11-20 13:40:34.614990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:20:31.954 [2024-11-20 13:40:34.615245] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:31.954 [2024-11-20 13:40:34.615275] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:31.954 [2024-11-20 13:40:34.615627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:31.954 [2024-11-20 13:40:34.615829] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:31.955 [2024-11-20 13:40:34.615845] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:20:31.955 [2024-11-20 13:40:34.616048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:31.955 13:40:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.955 13:40:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:31.955 13:40:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:20:31.955 13:40:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.955 13:40:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:20:31.955 13:40:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.955 13:40:34 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:20:31.955 13:40:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:20:31.955 13:40:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:20:31.955 13:40:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:31.955 13:40:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:20:31.955 13:40:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:31.955 13:40:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:31.955 13:40:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:31.955 13:40:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:20:31.955 13:40:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:31.955 13:40:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:31.955 13:40:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:20:32.213 [2024-11-20 13:40:35.012758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:32.213 /dev/nbd0 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:32.213 
13:40:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:32.213 1+0 records in 00:20:32.213 1+0 records out 00:20:32.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475555 s, 8.6 MB/s 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:20:32.213 13:40:35 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:32.781 { 00:20:32.781 "nbd_device": "/dev/nbd0", 00:20:32.781 "bdev_name": "raid" 00:20:32.781 } 00:20:32.781 ]' 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:32.781 { 00:20:32.781 "nbd_device": "/dev/nbd0", 00:20:32.781 "bdev_name": "raid" 00:20:32.781 } 00:20:32.781 ]' 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:20:32.781 4096+0 records in 00:20:32.781 4096+0 records out 00:20:32.781 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0322994 s, 64.9 MB/s 00:20:32.781 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:20:33.041 4096+0 records in 00:20:33.041 4096+0 records out 00:20:33.041 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.383142 s, 5.5 MB/s 00:20:33.041 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:20:33.041 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:20:33.041 13:40:35 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:20:33.041 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:20:33.041 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:20:33.041 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:20:33.041 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:20:33.041 128+0 records in 00:20:33.041 128+0 records out 00:20:33.041 65536 bytes (66 kB, 64 KiB) copied, 0.00062288 s, 105 MB/s 00:20:33.041 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:20:33.041 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:20:33.041 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:20:33.041 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:20:33.041 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:20:33.041 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:20:33.041 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:20:33.041 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:20:33.300 2035+0 records in 00:20:33.300 2035+0 records out 00:20:33.300 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0107351 s, 97.1 MB/s 00:20:33.300 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:20:33.300 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:20:33.300 13:40:35 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:20:33.300 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:20:33.300 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:20:33.300 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:20:33.300 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:20:33.300 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:20:33.300 456+0 records in 00:20:33.300 456+0 records out 00:20:33.300 233472 bytes (233 kB, 228 KiB) copied, 0.00200578 s, 116 MB/s 00:20:33.300 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:20:33.300 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:20:33.300 13:40:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:20:33.300 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:20:33.300 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:20:33.300 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:20:33.300 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:33.300 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:33.300 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:33.300 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:33.300 13:40:36 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:20:33.300 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:33.300 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:33.559 [2024-11-20 13:40:36.282383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.559 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:33.559 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:33.559 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:33.559 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:33.559 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:33.559 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:33.559 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:20:33.559 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:20:33.559 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:20:33.559 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:20:33.559 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:20:33.817 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:33.817 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:33.817 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:20:33.817 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:33.817 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:20:33.817 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:33.818 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:20:33.818 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:20:33.818 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:20:33.818 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:20:33.818 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:20:33.818 13:40:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60421 00:20:33.818 13:40:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60421 ']' 00:20:33.818 13:40:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60421 00:20:33.818 13:40:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:20:33.818 13:40:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.818 13:40:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60421 00:20:33.818 13:40:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:33.818 13:40:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:33.818 killing process with pid 60421 00:20:33.818 13:40:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60421' 00:20:33.818 13:40:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60421 
00:20:33.818 [2024-11-20 13:40:36.729077] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:33.818 13:40:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60421 00:20:33.818 [2024-11-20 13:40:36.729199] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:33.818 [2024-11-20 13:40:36.729265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:33.818 [2024-11-20 13:40:36.729287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:20:34.076 [2024-11-20 13:40:36.911743] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:35.451 13:40:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:20:35.451 00:20:35.451 real 0m4.466s 00:20:35.451 user 0m5.484s 00:20:35.451 sys 0m1.085s 00:20:35.451 13:40:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:35.451 ************************************ 00:20:35.451 END TEST raid_function_test_raid0 00:20:35.451 13:40:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:20:35.451 ************************************ 00:20:35.451 13:40:37 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:20:35.451 13:40:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:35.451 13:40:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:35.451 13:40:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:35.451 ************************************ 00:20:35.451 START TEST raid_function_test_concat 00:20:35.451 ************************************ 00:20:35.451 13:40:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:20:35.452 13:40:38 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:20:35.452 13:40:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:20:35.452 13:40:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:20:35.452 13:40:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60557 00:20:35.452 Process raid pid: 60557 00:20:35.452 13:40:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60557' 00:20:35.452 13:40:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:35.452 13:40:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60557 00:20:35.452 13:40:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60557 ']' 00:20:35.452 13:40:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.452 13:40:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.452 13:40:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.452 13:40:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.452 13:40:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:20:35.452 [2024-11-20 13:40:38.131740] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:20:35.452 [2024-11-20 13:40:38.131957] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.452 [2024-11-20 13:40:38.320376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.710 [2024-11-20 13:40:38.444403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.969 [2024-11-20 13:40:38.655130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:35.969 [2024-11-20 13:40:38.655199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:36.228 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.228 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:20:36.228 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:20:36.228 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.228 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:20:36.502 Base_1 00:20:36.502 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.502 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:20:36.502 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.502 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:20:36.502 Base_2 00:20:36.502 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.502 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:20:36.502 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.502 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:20:36.502 [2024-11-20 13:40:39.232803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:20:36.502 [2024-11-20 13:40:39.235405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:20:36.502 [2024-11-20 13:40:39.235492] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:36.502 [2024-11-20 13:40:39.235511] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:36.502 [2024-11-20 13:40:39.235785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:36.502 [2024-11-20 13:40:39.236033] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:36.503 [2024-11-20 13:40:39.236051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:20:36.503 [2024-11-20 13:40:39.236213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:36.503 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.503 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:36.503 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:20:36.503 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.503 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:20:36.503 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.503 13:40:39 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:20:36.503 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:20:36.503 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:20:36.503 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:36.503 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:20:36.503 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:36.503 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:36.503 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:36.503 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:20:36.503 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:36.503 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:36.503 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:20:36.764 [2024-11-20 13:40:39.533042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:36.764 /dev/nbd0 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:36.764 1+0 records in 00:20:36.764 1+0 records out 00:20:36.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362458 s, 11.3 MB/s 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk.sock 00:20:36.764 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:20:37.023 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:37.023 { 00:20:37.023 "nbd_device": "/dev/nbd0", 00:20:37.023 "bdev_name": "raid" 00:20:37.023 } 00:20:37.023 ]' 00:20:37.023 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:37.023 { 00:20:37.023 "nbd_device": "/dev/nbd0", 00:20:37.023 "bdev_name": "raid" 00:20:37.023 } 00:20:37.023 ]' 00:20:37.023 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC 
/dev/nbd0 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:20:37.281 13:40:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:20:37.281 4096+0 records in 00:20:37.281 4096+0 records out 00:20:37.281 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0307582 s, 68.2 MB/s 00:20:37.281 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:20:37.540 4096+0 records in 00:20:37.540 4096+0 records out 00:20:37.540 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.330356 s, 6.3 MB/s 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:20:37.540 128+0 records in 00:20:37.540 128+0 records out 00:20:37.540 65536 bytes (66 kB, 64 KiB) copied, 0.00113237 s, 57.9 MB/s 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:20:37.540 2035+0 records in 00:20:37.540 2035+0 records out 00:20:37.540 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0130697 s, 79.7 MB/s 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:20:37.540 13:40:40 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:20:37.540 456+0 records in 00:20:37.540 456+0 records out 00:20:37.540 233472 bytes (233 kB, 228 KiB) copied, 0.00320024 s, 73.0 MB/s 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:20:37.540 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:20:37.798 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:20:37.798 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:20:37.798 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:20:37.798 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:37.798 13:40:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:37.798 13:40:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:37.798 
13:40:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:37.798 13:40:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:20:37.798 13:40:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:37.798 13:40:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:38.057 [2024-11-20 13:40:40.731860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.057 13:40:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:38.057 13:40:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:38.057 13:40:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:38.057 13:40:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:38.057 13:40:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:38.057 13:40:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:38.057 13:40:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:20:38.057 13:40:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:20:38.057 13:40:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:20:38.057 13:40:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:20:38.057 13:40:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:38.316 13:40:41 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60557 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60557 ']' 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60557 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60557 00:20:38.316 killing process with pid 60557 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60557' 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60557 00:20:38.316 [2024-11-20 13:40:41.204857] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:38.316 13:40:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60557 00:20:38.316 [2024-11-20 13:40:41.205013] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:38.316 [2024-11-20 13:40:41.205087] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:38.316 [2024-11-20 13:40:41.205107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:20:38.574 [2024-11-20 13:40:41.383474] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:39.509 13:40:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:20:39.509 00:20:39.509 real 0m4.397s 00:20:39.509 user 0m5.441s 00:20:39.509 sys 0m1.054s 00:20:39.509 13:40:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:39.509 13:40:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:20:39.509 ************************************ 00:20:39.509 END TEST raid_function_test_concat 00:20:39.509 ************************************ 00:20:39.768 13:40:42 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:20:39.768 13:40:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:39.768 13:40:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:39.768 13:40:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:39.768 ************************************ 00:20:39.768 START TEST raid0_resize_test 00:20:39.768 ************************************ 00:20:39.768 13:40:42 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:20:39.768 13:40:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:20:39.768 13:40:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:20:39.768 13:40:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:20:39.768 13:40:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:20:39.768 13:40:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:20:39.768 13:40:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:20:39.768 13:40:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:20:39.768 13:40:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:20:39.768 13:40:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60686 00:20:39.768 Process raid pid: 60686 00:20:39.768 13:40:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60686' 00:20:39.768 13:40:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:39.768 13:40:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60686 00:20:39.768 13:40:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60686 ']' 00:20:39.768 13:40:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.768 13:40:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:39.768 13:40:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.768 13:40:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.768 13:40:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.768 [2024-11-20 13:40:42.581293] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:20:39.768 [2024-11-20 13:40:42.581449] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.027 [2024-11-20 13:40:42.764251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.027 [2024-11-20 13:40:42.924566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.285 [2024-11-20 13:40:43.150641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:40.285 [2024-11-20 13:40:43.150699] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.851 Base_1 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:20:40.851 
13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.851 Base_2 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.851 [2024-11-20 13:40:43.613941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:20:40.851 [2024-11-20 13:40:43.616392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:20:40.851 [2024-11-20 13:40:43.616487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:40.851 [2024-11-20 13:40:43.616507] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:40.851 [2024-11-20 13:40:43.616816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:20:40.851 [2024-11-20 13:40:43.617003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:40.851 [2024-11-20 13:40:43.617019] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:20:40.851 [2024-11-20 13:40:43.617187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:20:40.851 
13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.851 [2024-11-20 13:40:43.621903] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:20:40.851 [2024-11-20 13:40:43.621980] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:20:40.851 true 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.851 [2024-11-20 13:40:43.634161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:20:40.851 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:20:40.852 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.852 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 
-- # set +x 00:20:40.852 [2024-11-20 13:40:43.681942] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:20:40.852 [2024-11-20 13:40:43.681975] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:20:40.852 [2024-11-20 13:40:43.682011] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:20:40.852 true 00:20:40.852 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.852 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:20:40.852 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:20:40.852 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.852 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.852 [2024-11-20 13:40:43.694186] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:40.852 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.852 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:20:40.852 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:20:40.852 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:20:40.852 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:20:40.852 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:20:40.852 13:40:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60686 00:20:40.852 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60686 ']' 00:20:40.852 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60686 
00:20:40.852 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:20:40.852 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.852 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60686 00:20:41.110 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:41.110 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:41.110 killing process with pid 60686 00:20:41.110 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60686' 00:20:41.110 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60686 00:20:41.110 [2024-11-20 13:40:43.777186] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:41.110 13:40:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60686 00:20:41.110 [2024-11-20 13:40:43.777298] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:41.110 [2024-11-20 13:40:43.777366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:41.110 [2024-11-20 13:40:43.777382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:20:41.110 [2024-11-20 13:40:43.793099] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:42.045 13:40:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:20:42.045 00:20:42.045 real 0m2.412s 00:20:42.045 user 0m2.696s 00:20:42.045 sys 0m0.394s 00:20:42.045 13:40:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.045 13:40:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.045 ************************************ 00:20:42.045 END TEST 
raid0_resize_test 00:20:42.045 ************************************ 00:20:42.045 13:40:44 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:20:42.045 13:40:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:42.045 13:40:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.045 13:40:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:42.045 ************************************ 00:20:42.045 START TEST raid1_resize_test 00:20:42.045 ************************************ 00:20:42.045 13:40:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:20:42.045 13:40:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:20:42.045 13:40:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:20:42.045 13:40:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:20:42.045 13:40:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:20:42.045 13:40:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:20:42.045 13:40:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:20:42.045 13:40:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:20:42.045 13:40:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:20:42.045 13:40:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60742 00:20:42.045 Process raid pid: 60742 00:20:42.045 13:40:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60742' 00:20:42.045 13:40:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60742 00:20:42.045 13:40:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60742 ']' 00:20:42.045 13:40:44 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.045 13:40:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.045 13:40:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:42.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.045 13:40:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.045 13:40:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.045 13:40:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.304 [2024-11-20 13:40:45.036615] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:20:42.304 [2024-11-20 13:40:45.036811] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.564 [2024-11-20 13:40:45.229633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.564 [2024-11-20 13:40:45.395097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.823 [2024-11-20 13:40:45.627748] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:42.823 [2024-11-20 13:40:45.627813] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:43.392 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.392 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:20:43.392 13:40:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:20:43.392 
13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.392 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.392 Base_1 00:20:43.392 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.392 13:40:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:20:43.392 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.392 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.392 Base_2 00:20:43.392 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.392 13:40:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:20:43.392 13:40:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:20:43.392 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.392 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.392 [2024-11-20 13:40:46.106824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:20:43.392 [2024-11-20 13:40:46.109464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:20:43.392 [2024-11-20 13:40:46.109551] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:43.392 [2024-11-20 13:40:46.109573] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:43.392 [2024-11-20 13:40:46.109922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:20:43.392 [2024-11-20 13:40:46.110099] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:43.393 [2024-11-20 13:40:46.110124] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:20:43.393 [2024-11-20 13:40:46.110311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.393 [2024-11-20 13:40:46.114816] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:20:43.393 [2024-11-20 13:40:46.114859] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:20:43.393 true 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.393 [2024-11-20 13:40:46.127084] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:20:43.393 13:40:46 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.393 [2024-11-20 13:40:46.178864] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:20:43.393 [2024-11-20 13:40:46.178951] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:20:43.393 [2024-11-20 13:40:46.178999] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:20:43.393 true 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:20:43.393 [2024-11-20 13:40:46.191089] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:20:43.393 13:40:46 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60742 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60742 ']' 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60742 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60742 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:43.393 killing process with pid 60742 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60742' 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60742 00:20:43.393 [2024-11-20 13:40:46.280048] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:43.393 13:40:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60742 00:20:43.393 [2024-11-20 13:40:46.280163] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:43.393 [2024-11-20 13:40:46.280812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:43.393 [2024-11-20 13:40:46.280847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:20:43.393 [2024-11-20 13:40:46.296801] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:20:44.766 13:40:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:20:44.766 00:20:44.766 real 0m2.465s 00:20:44.766 user 0m2.768s 00:20:44.766 sys 0m0.411s 00:20:44.766 13:40:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.766 13:40:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.766 ************************************ 00:20:44.766 END TEST raid1_resize_test 00:20:44.766 ************************************ 00:20:44.766 13:40:47 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:20:44.766 13:40:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:20:44.766 13:40:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:20:44.766 13:40:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:44.766 13:40:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.766 13:40:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:44.766 ************************************ 00:20:44.766 START TEST raid_state_function_test 00:20:44.766 ************************************ 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:20:44.766 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60810 
00:20:44.767 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:44.767 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60810' 00:20:44.767 Process raid pid: 60810 00:20:44.767 13:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60810 00:20:44.767 13:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60810 ']' 00:20:44.767 13:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.767 13:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.767 13:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.767 13:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.767 13:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.767 [2024-11-20 13:40:47.576951] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:20:44.767 [2024-11-20 13:40:47.577133] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.024 [2024-11-20 13:40:47.764114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.024 [2024-11-20 13:40:47.896529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.282 [2024-11-20 13:40:48.101477] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:45.282 [2024-11-20 13:40:48.101517] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.848 [2024-11-20 13:40:48.586169] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:45.848 [2024-11-20 13:40:48.586235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:45.848 [2024-11-20 13:40:48.586253] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:45.848 [2024-11-20 13:40:48.586270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.848 13:40:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.848 "name": "Existed_Raid", 00:20:45.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.848 "strip_size_kb": 64, 00:20:45.848 "state": "configuring", 00:20:45.848 
"raid_level": "raid0", 00:20:45.848 "superblock": false, 00:20:45.848 "num_base_bdevs": 2, 00:20:45.848 "num_base_bdevs_discovered": 0, 00:20:45.848 "num_base_bdevs_operational": 2, 00:20:45.848 "base_bdevs_list": [ 00:20:45.848 { 00:20:45.848 "name": "BaseBdev1", 00:20:45.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.848 "is_configured": false, 00:20:45.848 "data_offset": 0, 00:20:45.848 "data_size": 0 00:20:45.848 }, 00:20:45.848 { 00:20:45.848 "name": "BaseBdev2", 00:20:45.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.848 "is_configured": false, 00:20:45.848 "data_offset": 0, 00:20:45.848 "data_size": 0 00:20:45.848 } 00:20:45.848 ] 00:20:45.848 }' 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.848 13:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.412 [2024-11-20 13:40:49.114256] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:46.412 [2024-11-20 13:40:49.114305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:20:46.412 [2024-11-20 13:40:49.122233] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:46.412 [2024-11-20 13:40:49.122288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:46.412 [2024-11-20 13:40:49.122304] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:46.412 [2024-11-20 13:40:49.122323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.412 [2024-11-20 13:40:49.168124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:46.412 BaseBdev1 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.412 [ 00:20:46.412 { 00:20:46.412 "name": "BaseBdev1", 00:20:46.412 "aliases": [ 00:20:46.412 "b4d5f33b-463f-42c2-94ec-64502c2732d8" 00:20:46.412 ], 00:20:46.412 "product_name": "Malloc disk", 00:20:46.412 "block_size": 512, 00:20:46.412 "num_blocks": 65536, 00:20:46.412 "uuid": "b4d5f33b-463f-42c2-94ec-64502c2732d8", 00:20:46.412 "assigned_rate_limits": { 00:20:46.412 "rw_ios_per_sec": 0, 00:20:46.412 "rw_mbytes_per_sec": 0, 00:20:46.412 "r_mbytes_per_sec": 0, 00:20:46.412 "w_mbytes_per_sec": 0 00:20:46.412 }, 00:20:46.412 "claimed": true, 00:20:46.412 "claim_type": "exclusive_write", 00:20:46.412 "zoned": false, 00:20:46.412 "supported_io_types": { 00:20:46.412 "read": true, 00:20:46.412 "write": true, 00:20:46.412 "unmap": true, 00:20:46.412 "flush": true, 00:20:46.412 "reset": true, 00:20:46.412 "nvme_admin": false, 00:20:46.412 "nvme_io": false, 00:20:46.412 "nvme_io_md": false, 00:20:46.412 "write_zeroes": true, 00:20:46.412 "zcopy": true, 00:20:46.412 "get_zone_info": false, 00:20:46.412 "zone_management": false, 00:20:46.412 "zone_append": false, 00:20:46.412 "compare": false, 00:20:46.412 "compare_and_write": false, 00:20:46.412 "abort": true, 00:20:46.412 "seek_hole": false, 00:20:46.412 "seek_data": false, 00:20:46.412 "copy": true, 00:20:46.412 "nvme_iov_md": 
false 00:20:46.412 }, 00:20:46.412 "memory_domains": [ 00:20:46.412 { 00:20:46.412 "dma_device_id": "system", 00:20:46.412 "dma_device_type": 1 00:20:46.412 }, 00:20:46.412 { 00:20:46.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.412 "dma_device_type": 2 00:20:46.412 } 00:20:46.412 ], 00:20:46.412 "driver_specific": {} 00:20:46.412 } 00:20:46.412 ] 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.412 13:40:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.412 "name": "Existed_Raid", 00:20:46.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.412 "strip_size_kb": 64, 00:20:46.412 "state": "configuring", 00:20:46.412 "raid_level": "raid0", 00:20:46.412 "superblock": false, 00:20:46.412 "num_base_bdevs": 2, 00:20:46.412 "num_base_bdevs_discovered": 1, 00:20:46.412 "num_base_bdevs_operational": 2, 00:20:46.412 "base_bdevs_list": [ 00:20:46.412 { 00:20:46.412 "name": "BaseBdev1", 00:20:46.412 "uuid": "b4d5f33b-463f-42c2-94ec-64502c2732d8", 00:20:46.412 "is_configured": true, 00:20:46.412 "data_offset": 0, 00:20:46.412 "data_size": 65536 00:20:46.412 }, 00:20:46.412 { 00:20:46.412 "name": "BaseBdev2", 00:20:46.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.412 "is_configured": false, 00:20:46.412 "data_offset": 0, 00:20:46.412 "data_size": 0 00:20:46.412 } 00:20:46.412 ] 00:20:46.412 }' 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.412 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.977 [2024-11-20 13:40:49.772339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:46.977 [2024-11-20 13:40:49.772406] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.977 [2024-11-20 13:40:49.780369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:46.977 [2024-11-20 13:40:49.782862] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:46.977 [2024-11-20 13:40:49.782953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.977 "name": "Existed_Raid", 00:20:46.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.977 "strip_size_kb": 64, 00:20:46.977 "state": "configuring", 00:20:46.977 "raid_level": "raid0", 00:20:46.977 "superblock": false, 00:20:46.977 "num_base_bdevs": 2, 00:20:46.977 "num_base_bdevs_discovered": 1, 00:20:46.977 "num_base_bdevs_operational": 2, 00:20:46.977 "base_bdevs_list": [ 00:20:46.977 { 00:20:46.977 "name": "BaseBdev1", 00:20:46.977 "uuid": "b4d5f33b-463f-42c2-94ec-64502c2732d8", 00:20:46.977 "is_configured": true, 00:20:46.977 "data_offset": 0, 00:20:46.977 "data_size": 65536 00:20:46.977 }, 00:20:46.977 { 00:20:46.977 "name": "BaseBdev2", 00:20:46.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.977 "is_configured": false, 00:20:46.977 "data_offset": 0, 00:20:46.977 "data_size": 0 
00:20:46.977 } 00:20:46.977 ] 00:20:46.977 }' 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.977 13:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.542 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.543 [2024-11-20 13:40:50.331402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:47.543 [2024-11-20 13:40:50.331463] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:47.543 [2024-11-20 13:40:50.331476] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:47.543 [2024-11-20 13:40:50.331847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:47.543 [2024-11-20 13:40:50.332113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:47.543 [2024-11-20 13:40:50.332146] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:47.543 [2024-11-20 13:40:50.332466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.543 BaseBdev2 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:47.543 13:40:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.543 [ 00:20:47.543 { 00:20:47.543 "name": "BaseBdev2", 00:20:47.543 "aliases": [ 00:20:47.543 "4e81d515-f68d-4ff2-a98b-f44b98c383f2" 00:20:47.543 ], 00:20:47.543 "product_name": "Malloc disk", 00:20:47.543 "block_size": 512, 00:20:47.543 "num_blocks": 65536, 00:20:47.543 "uuid": "4e81d515-f68d-4ff2-a98b-f44b98c383f2", 00:20:47.543 "assigned_rate_limits": { 00:20:47.543 "rw_ios_per_sec": 0, 00:20:47.543 "rw_mbytes_per_sec": 0, 00:20:47.543 "r_mbytes_per_sec": 0, 00:20:47.543 "w_mbytes_per_sec": 0 00:20:47.543 }, 00:20:47.543 "claimed": true, 00:20:47.543 "claim_type": "exclusive_write", 00:20:47.543 "zoned": false, 00:20:47.543 "supported_io_types": { 00:20:47.543 "read": true, 00:20:47.543 "write": true, 00:20:47.543 "unmap": true, 00:20:47.543 "flush": true, 00:20:47.543 "reset": true, 00:20:47.543 "nvme_admin": false, 00:20:47.543 "nvme_io": false, 00:20:47.543 "nvme_io_md": 
false, 00:20:47.543 "write_zeroes": true, 00:20:47.543 "zcopy": true, 00:20:47.543 "get_zone_info": false, 00:20:47.543 "zone_management": false, 00:20:47.543 "zone_append": false, 00:20:47.543 "compare": false, 00:20:47.543 "compare_and_write": false, 00:20:47.543 "abort": true, 00:20:47.543 "seek_hole": false, 00:20:47.543 "seek_data": false, 00:20:47.543 "copy": true, 00:20:47.543 "nvme_iov_md": false 00:20:47.543 }, 00:20:47.543 "memory_domains": [ 00:20:47.543 { 00:20:47.543 "dma_device_id": "system", 00:20:47.543 "dma_device_type": 1 00:20:47.543 }, 00:20:47.543 { 00:20:47.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.543 "dma_device_type": 2 00:20:47.543 } 00:20:47.543 ], 00:20:47.543 "driver_specific": {} 00:20:47.543 } 00:20:47.543 ] 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.543 "name": "Existed_Raid", 00:20:47.543 "uuid": "0c761281-2a5a-42af-9d82-e4afd9ef4977", 00:20:47.543 "strip_size_kb": 64, 00:20:47.543 "state": "online", 00:20:47.543 "raid_level": "raid0", 00:20:47.543 "superblock": false, 00:20:47.543 "num_base_bdevs": 2, 00:20:47.543 "num_base_bdevs_discovered": 2, 00:20:47.543 "num_base_bdevs_operational": 2, 00:20:47.543 "base_bdevs_list": [ 00:20:47.543 { 00:20:47.543 "name": "BaseBdev1", 00:20:47.543 "uuid": "b4d5f33b-463f-42c2-94ec-64502c2732d8", 00:20:47.543 "is_configured": true, 00:20:47.543 "data_offset": 0, 00:20:47.543 "data_size": 65536 00:20:47.543 }, 00:20:47.543 { 00:20:47.543 "name": "BaseBdev2", 00:20:47.543 "uuid": "4e81d515-f68d-4ff2-a98b-f44b98c383f2", 00:20:47.543 "is_configured": true, 00:20:47.543 "data_offset": 0, 00:20:47.543 "data_size": 65536 00:20:47.543 } 00:20:47.543 ] 00:20:47.543 }' 00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:20:47.543 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.108 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:48.108 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:48.108 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:48.108 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:48.108 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:48.108 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:48.108 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:48.108 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.108 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:48.108 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.108 [2024-11-20 13:40:50.851954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:48.108 13:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.108 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:48.108 "name": "Existed_Raid", 00:20:48.108 "aliases": [ 00:20:48.108 "0c761281-2a5a-42af-9d82-e4afd9ef4977" 00:20:48.108 ], 00:20:48.108 "product_name": "Raid Volume", 00:20:48.108 "block_size": 512, 00:20:48.108 "num_blocks": 131072, 00:20:48.108 "uuid": "0c761281-2a5a-42af-9d82-e4afd9ef4977", 00:20:48.108 "assigned_rate_limits": { 00:20:48.108 "rw_ios_per_sec": 0, 00:20:48.108 "rw_mbytes_per_sec": 0, 00:20:48.108 "r_mbytes_per_sec": 
0, 00:20:48.108 "w_mbytes_per_sec": 0 00:20:48.108 }, 00:20:48.108 "claimed": false, 00:20:48.108 "zoned": false, 00:20:48.108 "supported_io_types": { 00:20:48.108 "read": true, 00:20:48.108 "write": true, 00:20:48.108 "unmap": true, 00:20:48.108 "flush": true, 00:20:48.108 "reset": true, 00:20:48.108 "nvme_admin": false, 00:20:48.108 "nvme_io": false, 00:20:48.108 "nvme_io_md": false, 00:20:48.108 "write_zeroes": true, 00:20:48.108 "zcopy": false, 00:20:48.108 "get_zone_info": false, 00:20:48.108 "zone_management": false, 00:20:48.108 "zone_append": false, 00:20:48.108 "compare": false, 00:20:48.108 "compare_and_write": false, 00:20:48.108 "abort": false, 00:20:48.108 "seek_hole": false, 00:20:48.108 "seek_data": false, 00:20:48.108 "copy": false, 00:20:48.108 "nvme_iov_md": false 00:20:48.108 }, 00:20:48.108 "memory_domains": [ 00:20:48.108 { 00:20:48.108 "dma_device_id": "system", 00:20:48.108 "dma_device_type": 1 00:20:48.108 }, 00:20:48.108 { 00:20:48.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.108 "dma_device_type": 2 00:20:48.108 }, 00:20:48.108 { 00:20:48.108 "dma_device_id": "system", 00:20:48.108 "dma_device_type": 1 00:20:48.108 }, 00:20:48.108 { 00:20:48.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.108 "dma_device_type": 2 00:20:48.108 } 00:20:48.108 ], 00:20:48.108 "driver_specific": { 00:20:48.108 "raid": { 00:20:48.108 "uuid": "0c761281-2a5a-42af-9d82-e4afd9ef4977", 00:20:48.108 "strip_size_kb": 64, 00:20:48.109 "state": "online", 00:20:48.109 "raid_level": "raid0", 00:20:48.109 "superblock": false, 00:20:48.109 "num_base_bdevs": 2, 00:20:48.109 "num_base_bdevs_discovered": 2, 00:20:48.109 "num_base_bdevs_operational": 2, 00:20:48.109 "base_bdevs_list": [ 00:20:48.109 { 00:20:48.109 "name": "BaseBdev1", 00:20:48.109 "uuid": "b4d5f33b-463f-42c2-94ec-64502c2732d8", 00:20:48.109 "is_configured": true, 00:20:48.109 "data_offset": 0, 00:20:48.109 "data_size": 65536 00:20:48.109 }, 00:20:48.109 { 00:20:48.109 "name": "BaseBdev2", 
00:20:48.109 "uuid": "4e81d515-f68d-4ff2-a98b-f44b98c383f2", 00:20:48.109 "is_configured": true, 00:20:48.109 "data_offset": 0, 00:20:48.109 "data_size": 65536 00:20:48.109 } 00:20:48.109 ] 00:20:48.109 } 00:20:48.109 } 00:20:48.109 }' 00:20:48.109 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:48.109 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:48.109 BaseBdev2' 00:20:48.109 13:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:48.109 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:48.109 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:48.109 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:48.109 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:48.109 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.109 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.368 [2024-11-20 13:40:51.107749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:48.368 [2024-11-20 13:40:51.107815] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:48.368 [2024-11-20 13:40:51.107884] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.368 "name": "Existed_Raid", 00:20:48.368 "uuid": "0c761281-2a5a-42af-9d82-e4afd9ef4977", 00:20:48.368 "strip_size_kb": 64, 00:20:48.368 
"state": "offline", 00:20:48.368 "raid_level": "raid0", 00:20:48.368 "superblock": false, 00:20:48.368 "num_base_bdevs": 2, 00:20:48.368 "num_base_bdevs_discovered": 1, 00:20:48.368 "num_base_bdevs_operational": 1, 00:20:48.368 "base_bdevs_list": [ 00:20:48.368 { 00:20:48.368 "name": null, 00:20:48.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.368 "is_configured": false, 00:20:48.368 "data_offset": 0, 00:20:48.368 "data_size": 65536 00:20:48.368 }, 00:20:48.368 { 00:20:48.368 "name": "BaseBdev2", 00:20:48.368 "uuid": "4e81d515-f68d-4ff2-a98b-f44b98c383f2", 00:20:48.368 "is_configured": true, 00:20:48.368 "data_offset": 0, 00:20:48.368 "data_size": 65536 00:20:48.368 } 00:20:48.368 ] 00:20:48.368 }' 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.368 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.944 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:48.944 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:48.944 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.944 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:48.944 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.944 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.944 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.944 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:48.944 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:48.944 13:40:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:48.944 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.944 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.944 [2024-11-20 13:40:51.744862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:48.944 [2024-11-20 13:40:51.744968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:48.944 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.944 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:48.944 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:48.945 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.945 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.945 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.945 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:48.945 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.207 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:49.207 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:49.207 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:49.207 13:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60810 00:20:49.207 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60810 ']' 00:20:49.207 13:40:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60810 00:20:49.207 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:20:49.207 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.207 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60810 00:20:49.207 killing process with pid 60810 00:20:49.207 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:49.207 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:49.207 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60810' 00:20:49.207 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60810 00:20:49.207 [2024-11-20 13:40:51.930727] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:49.207 13:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60810 00:20:49.207 [2024-11-20 13:40:51.946068] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:50.141 ************************************ 00:20:50.141 END TEST raid_state_function_test 00:20:50.141 ************************************ 00:20:50.141 13:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:20:50.141 00:20:50.141 real 0m5.542s 00:20:50.141 user 0m8.386s 00:20:50.141 sys 0m0.789s 00:20:50.141 13:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.141 13:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.141 13:40:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:20:50.141 13:40:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:20:50.141 13:40:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.141 13:40:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:50.141 ************************************ 00:20:50.141 START TEST raid_state_function_test_sb 00:20:50.141 ************************************ 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61063 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61063' 00:20:50.141 Process raid pid: 61063 00:20:50.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61063 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61063 ']' 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.141 13:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.142 13:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.142 13:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.142 13:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.400 [2024-11-20 13:40:53.152546] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:20:50.400 [2024-11-20 13:40:53.152732] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.659 [2024-11-20 13:40:53.343835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.659 [2024-11-20 13:40:53.501133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.917 [2024-11-20 13:40:53.704894] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:50.917 [2024-11-20 13:40:53.704962] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:51.484 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.484 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:51.484 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:51.484 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.484 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.484 [2024-11-20 13:40:54.128164] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:51.484 [2024-11-20 13:40:54.128232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:51.484 [2024-11-20 13:40:54.128250] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:51.484 [2024-11-20 13:40:54.128267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:51.484 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.484 
13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:20:51.484 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:51.484 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:51.484 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:51.484 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:51.484 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:51.484 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:51.484 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:51.484 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:51.484 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:51.484 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.484 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.484 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:51.485 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.485 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.485 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.485 "name": "Existed_Raid", 00:20:51.485 "uuid": "a6d2c90d-e829-4fda-9459-dec48d961e18", 00:20:51.485 "strip_size_kb": 
64, 00:20:51.485 "state": "configuring", 00:20:51.485 "raid_level": "raid0", 00:20:51.485 "superblock": true, 00:20:51.485 "num_base_bdevs": 2, 00:20:51.485 "num_base_bdevs_discovered": 0, 00:20:51.485 "num_base_bdevs_operational": 2, 00:20:51.485 "base_bdevs_list": [ 00:20:51.485 { 00:20:51.485 "name": "BaseBdev1", 00:20:51.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.485 "is_configured": false, 00:20:51.485 "data_offset": 0, 00:20:51.485 "data_size": 0 00:20:51.485 }, 00:20:51.485 { 00:20:51.485 "name": "BaseBdev2", 00:20:51.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.485 "is_configured": false, 00:20:51.485 "data_offset": 0, 00:20:51.485 "data_size": 0 00:20:51.485 } 00:20:51.485 ] 00:20:51.485 }' 00:20:51.485 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.485 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.743 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:51.743 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.743 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.743 [2024-11-20 13:40:54.648233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:51.743 [2024-11-20 13:40:54.648308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:51.743 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.743 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:51.743 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.743 13:40:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.743 [2024-11-20 13:40:54.656241] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:51.743 [2024-11-20 13:40:54.656331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:51.743 [2024-11-20 13:40:54.656347] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:51.743 [2024-11-20 13:40:54.656367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:52.001 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.001 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:52.001 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.002 [2024-11-20 13:40:54.702101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:52.002 BaseBdev1 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.002 [ 00:20:52.002 { 00:20:52.002 "name": "BaseBdev1", 00:20:52.002 "aliases": [ 00:20:52.002 "83fa6373-8025-4311-b791-0481db171a74" 00:20:52.002 ], 00:20:52.002 "product_name": "Malloc disk", 00:20:52.002 "block_size": 512, 00:20:52.002 "num_blocks": 65536, 00:20:52.002 "uuid": "83fa6373-8025-4311-b791-0481db171a74", 00:20:52.002 "assigned_rate_limits": { 00:20:52.002 "rw_ios_per_sec": 0, 00:20:52.002 "rw_mbytes_per_sec": 0, 00:20:52.002 "r_mbytes_per_sec": 0, 00:20:52.002 "w_mbytes_per_sec": 0 00:20:52.002 }, 00:20:52.002 "claimed": true, 00:20:52.002 "claim_type": "exclusive_write", 00:20:52.002 "zoned": false, 00:20:52.002 "supported_io_types": { 00:20:52.002 "read": true, 00:20:52.002 "write": true, 00:20:52.002 "unmap": true, 00:20:52.002 "flush": true, 00:20:52.002 "reset": true, 00:20:52.002 "nvme_admin": false, 00:20:52.002 "nvme_io": false, 00:20:52.002 "nvme_io_md": false, 00:20:52.002 "write_zeroes": true, 00:20:52.002 "zcopy": true, 00:20:52.002 "get_zone_info": false, 00:20:52.002 "zone_management": false, 00:20:52.002 "zone_append": false, 00:20:52.002 "compare": false, 00:20:52.002 "compare_and_write": false, 00:20:52.002 
"abort": true, 00:20:52.002 "seek_hole": false, 00:20:52.002 "seek_data": false, 00:20:52.002 "copy": true, 00:20:52.002 "nvme_iov_md": false 00:20:52.002 }, 00:20:52.002 "memory_domains": [ 00:20:52.002 { 00:20:52.002 "dma_device_id": "system", 00:20:52.002 "dma_device_type": 1 00:20:52.002 }, 00:20:52.002 { 00:20:52.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:52.002 "dma_device_type": 2 00:20:52.002 } 00:20:52.002 ], 00:20:52.002 "driver_specific": {} 00:20:52.002 } 00:20:52.002 ] 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:52.002 "name": "Existed_Raid", 00:20:52.002 "uuid": "bc06f6a1-18a1-4fdb-a28a-d99d151fd3ba", 00:20:52.002 "strip_size_kb": 64, 00:20:52.002 "state": "configuring", 00:20:52.002 "raid_level": "raid0", 00:20:52.002 "superblock": true, 00:20:52.002 "num_base_bdevs": 2, 00:20:52.002 "num_base_bdevs_discovered": 1, 00:20:52.002 "num_base_bdevs_operational": 2, 00:20:52.002 "base_bdevs_list": [ 00:20:52.002 { 00:20:52.002 "name": "BaseBdev1", 00:20:52.002 "uuid": "83fa6373-8025-4311-b791-0481db171a74", 00:20:52.002 "is_configured": true, 00:20:52.002 "data_offset": 2048, 00:20:52.002 "data_size": 63488 00:20:52.002 }, 00:20:52.002 { 00:20:52.002 "name": "BaseBdev2", 00:20:52.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.002 "is_configured": false, 00:20:52.002 "data_offset": 0, 00:20:52.002 "data_size": 0 00:20:52.002 } 00:20:52.002 ] 00:20:52.002 }' 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:52.002 13:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:52.570 [2024-11-20 13:40:55.250328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:52.570 [2024-11-20 13:40:55.250410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.570 [2024-11-20 13:40:55.258368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:52.570 [2024-11-20 13:40:55.260959] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:52.570 [2024-11-20 13:40:55.261141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:52.570 "name": "Existed_Raid", 00:20:52.570 "uuid": "0d586ab7-1b6c-4d28-b6ca-9ebf75ddd0fc", 00:20:52.570 "strip_size_kb": 64, 00:20:52.570 "state": "configuring", 00:20:52.570 "raid_level": "raid0", 00:20:52.570 "superblock": true, 00:20:52.570 "num_base_bdevs": 2, 00:20:52.570 "num_base_bdevs_discovered": 1, 00:20:52.570 "num_base_bdevs_operational": 2, 00:20:52.570 "base_bdevs_list": [ 00:20:52.570 { 00:20:52.570 "name": "BaseBdev1", 00:20:52.570 "uuid": "83fa6373-8025-4311-b791-0481db171a74", 00:20:52.570 "is_configured": true, 00:20:52.570 "data_offset": 2048, 
00:20:52.570 "data_size": 63488 00:20:52.570 }, 00:20:52.570 { 00:20:52.570 "name": "BaseBdev2", 00:20:52.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.570 "is_configured": false, 00:20:52.570 "data_offset": 0, 00:20:52.570 "data_size": 0 00:20:52.570 } 00:20:52.570 ] 00:20:52.570 }' 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:52.570 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.138 [2024-11-20 13:40:55.805130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:53.138 [2024-11-20 13:40:55.805466] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:53.138 [2024-11-20 13:40:55.805486] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:53.138 [2024-11-20 13:40:55.805817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:53.138 BaseBdev2 00:20:53.138 [2024-11-20 13:40:55.806060] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:53.138 [2024-11-20 13:40:55.806092] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:53.138 [2024-11-20 13:40:55.806264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.138 [ 00:20:53.138 { 00:20:53.138 "name": "BaseBdev2", 00:20:53.138 "aliases": [ 00:20:53.138 "8b059173-037d-495d-9acd-a2775bb7ae97" 00:20:53.138 ], 00:20:53.138 "product_name": "Malloc disk", 00:20:53.138 "block_size": 512, 00:20:53.138 "num_blocks": 65536, 00:20:53.138 "uuid": "8b059173-037d-495d-9acd-a2775bb7ae97", 00:20:53.138 "assigned_rate_limits": { 00:20:53.138 "rw_ios_per_sec": 0, 00:20:53.138 "rw_mbytes_per_sec": 0, 00:20:53.138 "r_mbytes_per_sec": 0, 00:20:53.138 "w_mbytes_per_sec": 0 00:20:53.138 }, 00:20:53.138 "claimed": true, 00:20:53.138 "claim_type": 
"exclusive_write", 00:20:53.138 "zoned": false, 00:20:53.138 "supported_io_types": { 00:20:53.138 "read": true, 00:20:53.138 "write": true, 00:20:53.138 "unmap": true, 00:20:53.138 "flush": true, 00:20:53.138 "reset": true, 00:20:53.138 "nvme_admin": false, 00:20:53.138 "nvme_io": false, 00:20:53.138 "nvme_io_md": false, 00:20:53.138 "write_zeroes": true, 00:20:53.138 "zcopy": true, 00:20:53.138 "get_zone_info": false, 00:20:53.138 "zone_management": false, 00:20:53.138 "zone_append": false, 00:20:53.138 "compare": false, 00:20:53.138 "compare_and_write": false, 00:20:53.138 "abort": true, 00:20:53.138 "seek_hole": false, 00:20:53.138 "seek_data": false, 00:20:53.138 "copy": true, 00:20:53.138 "nvme_iov_md": false 00:20:53.138 }, 00:20:53.138 "memory_domains": [ 00:20:53.138 { 00:20:53.138 "dma_device_id": "system", 00:20:53.138 "dma_device_type": 1 00:20:53.138 }, 00:20:53.138 { 00:20:53.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.138 "dma_device_type": 2 00:20:53.138 } 00:20:53.138 ], 00:20:53.138 "driver_specific": {} 00:20:53.138 } 00:20:53.138 ] 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.138 "name": "Existed_Raid", 00:20:53.138 "uuid": "0d586ab7-1b6c-4d28-b6ca-9ebf75ddd0fc", 00:20:53.138 "strip_size_kb": 64, 00:20:53.138 "state": "online", 00:20:53.138 "raid_level": "raid0", 00:20:53.138 "superblock": true, 00:20:53.138 "num_base_bdevs": 2, 00:20:53.138 "num_base_bdevs_discovered": 2, 00:20:53.138 "num_base_bdevs_operational": 2, 00:20:53.138 "base_bdevs_list": [ 00:20:53.138 { 00:20:53.138 "name": "BaseBdev1", 00:20:53.138 "uuid": "83fa6373-8025-4311-b791-0481db171a74", 00:20:53.138 "is_configured": true, 00:20:53.138 "data_offset": 2048, 00:20:53.138 "data_size": 63488 
00:20:53.138 }, 00:20:53.138 { 00:20:53.138 "name": "BaseBdev2", 00:20:53.138 "uuid": "8b059173-037d-495d-9acd-a2775bb7ae97", 00:20:53.138 "is_configured": true, 00:20:53.138 "data_offset": 2048, 00:20:53.138 "data_size": 63488 00:20:53.138 } 00:20:53.138 ] 00:20:53.138 }' 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.138 13:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:53.706 [2024-11-20 13:40:56.369693] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:53.706 "name": 
"Existed_Raid", 00:20:53.706 "aliases": [ 00:20:53.706 "0d586ab7-1b6c-4d28-b6ca-9ebf75ddd0fc" 00:20:53.706 ], 00:20:53.706 "product_name": "Raid Volume", 00:20:53.706 "block_size": 512, 00:20:53.706 "num_blocks": 126976, 00:20:53.706 "uuid": "0d586ab7-1b6c-4d28-b6ca-9ebf75ddd0fc", 00:20:53.706 "assigned_rate_limits": { 00:20:53.706 "rw_ios_per_sec": 0, 00:20:53.706 "rw_mbytes_per_sec": 0, 00:20:53.706 "r_mbytes_per_sec": 0, 00:20:53.706 "w_mbytes_per_sec": 0 00:20:53.706 }, 00:20:53.706 "claimed": false, 00:20:53.706 "zoned": false, 00:20:53.706 "supported_io_types": { 00:20:53.706 "read": true, 00:20:53.706 "write": true, 00:20:53.706 "unmap": true, 00:20:53.706 "flush": true, 00:20:53.706 "reset": true, 00:20:53.706 "nvme_admin": false, 00:20:53.706 "nvme_io": false, 00:20:53.706 "nvme_io_md": false, 00:20:53.706 "write_zeroes": true, 00:20:53.706 "zcopy": false, 00:20:53.706 "get_zone_info": false, 00:20:53.706 "zone_management": false, 00:20:53.706 "zone_append": false, 00:20:53.706 "compare": false, 00:20:53.706 "compare_and_write": false, 00:20:53.706 "abort": false, 00:20:53.706 "seek_hole": false, 00:20:53.706 "seek_data": false, 00:20:53.706 "copy": false, 00:20:53.706 "nvme_iov_md": false 00:20:53.706 }, 00:20:53.706 "memory_domains": [ 00:20:53.706 { 00:20:53.706 "dma_device_id": "system", 00:20:53.706 "dma_device_type": 1 00:20:53.706 }, 00:20:53.706 { 00:20:53.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.706 "dma_device_type": 2 00:20:53.706 }, 00:20:53.706 { 00:20:53.706 "dma_device_id": "system", 00:20:53.706 "dma_device_type": 1 00:20:53.706 }, 00:20:53.706 { 00:20:53.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.706 "dma_device_type": 2 00:20:53.706 } 00:20:53.706 ], 00:20:53.706 "driver_specific": { 00:20:53.706 "raid": { 00:20:53.706 "uuid": "0d586ab7-1b6c-4d28-b6ca-9ebf75ddd0fc", 00:20:53.706 "strip_size_kb": 64, 00:20:53.706 "state": "online", 00:20:53.706 "raid_level": "raid0", 00:20:53.706 "superblock": true, 00:20:53.706 
"num_base_bdevs": 2, 00:20:53.706 "num_base_bdevs_discovered": 2, 00:20:53.706 "num_base_bdevs_operational": 2, 00:20:53.706 "base_bdevs_list": [ 00:20:53.706 { 00:20:53.706 "name": "BaseBdev1", 00:20:53.706 "uuid": "83fa6373-8025-4311-b791-0481db171a74", 00:20:53.706 "is_configured": true, 00:20:53.706 "data_offset": 2048, 00:20:53.706 "data_size": 63488 00:20:53.706 }, 00:20:53.706 { 00:20:53.706 "name": "BaseBdev2", 00:20:53.706 "uuid": "8b059173-037d-495d-9acd-a2775bb7ae97", 00:20:53.706 "is_configured": true, 00:20:53.706 "data_offset": 2048, 00:20:53.706 "data_size": 63488 00:20:53.706 } 00:20:53.706 ] 00:20:53.706 } 00:20:53.706 } 00:20:53.706 }' 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:53.706 BaseBdev2' 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:53.706 13:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.966 [2024-11-20 13:40:56.629495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:53.966 [2024-11-20 13:40:56.629546] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:53.966 [2024-11-20 13:40:56.629616] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.966 13:40:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.966 "name": "Existed_Raid", 00:20:53.966 "uuid": "0d586ab7-1b6c-4d28-b6ca-9ebf75ddd0fc", 00:20:53.966 "strip_size_kb": 64, 00:20:53.966 "state": "offline", 00:20:53.966 "raid_level": "raid0", 00:20:53.966 "superblock": true, 00:20:53.966 "num_base_bdevs": 2, 00:20:53.966 "num_base_bdevs_discovered": 1, 00:20:53.966 "num_base_bdevs_operational": 1, 00:20:53.966 "base_bdevs_list": [ 00:20:53.966 { 00:20:53.966 "name": null, 00:20:53.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.966 "is_configured": false, 00:20:53.966 "data_offset": 0, 00:20:53.966 "data_size": 63488 00:20:53.966 }, 00:20:53.966 { 00:20:53.966 "name": "BaseBdev2", 00:20:53.966 "uuid": "8b059173-037d-495d-9acd-a2775bb7ae97", 00:20:53.966 "is_configured": true, 00:20:53.966 "data_offset": 2048, 00:20:53.966 "data_size": 63488 00:20:53.966 } 00:20:53.966 ] 00:20:53.966 }' 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.966 13:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.594 13:40:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.594 [2024-11-20 13:40:57.287277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:54.594 [2024-11-20 13:40:57.287354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61063 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61063 ']' 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61063 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61063 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:54.594 killing process with pid 61063 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61063' 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61063 00:20:54.594 [2024-11-20 13:40:57.469390] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:54.594 13:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61063 00:20:54.594 [2024-11-20 13:40:57.485326] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:55.988 ************************************ 00:20:55.988 END TEST 
raid_state_function_test_sb 00:20:55.988 ************************************ 00:20:55.988 13:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:20:55.988 00:20:55.988 real 0m5.492s 00:20:55.988 user 0m8.303s 00:20:55.988 sys 0m0.772s 00:20:55.988 13:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:55.988 13:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.988 13:40:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:20:55.988 13:40:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:55.988 13:40:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:55.988 13:40:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:55.988 ************************************ 00:20:55.988 START TEST raid_superblock_test 00:20:55.988 ************************************ 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:55.988 13:40:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61321 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61321 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61321 ']' 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:55.988 13:40:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.989 13:40:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.989 [2024-11-20 13:40:58.698666] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:20:55.989 [2024-11-20 13:40:58.699204] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61321 ] 00:20:55.989 [2024-11-20 13:40:58.892627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.247 [2024-11-20 13:40:59.075753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.503 [2024-11-20 13:40:59.312092] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:56.503 [2024-11-20 13:40:59.312170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:57.071 13:40:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.071 malloc1 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.071 [2024-11-20 13:40:59.815714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:57.071 [2024-11-20 13:40:59.815791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:57.071 [2024-11-20 13:40:59.815823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:57.071 [2024-11-20 13:40:59.815839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:57.071 [2024-11-20 13:40:59.818743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:57.071 [2024-11-20 13:40:59.818790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:57.071 pt1 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:57.071 13:40:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.071 malloc2 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.071 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.071 [2024-11-20 13:40:59.871795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:57.071 [2024-11-20 13:40:59.871890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:57.071 [2024-11-20 13:40:59.871980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:57.071 
[2024-11-20 13:40:59.871999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:57.071 [2024-11-20 13:40:59.874929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:57.071 [2024-11-20 13:40:59.875001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:57.071 pt2 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.072 [2024-11-20 13:40:59.883935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:57.072 [2024-11-20 13:40:59.886410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:57.072 [2024-11-20 13:40:59.886614] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:57.072 [2024-11-20 13:40:59.886632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:57.072 [2024-11-20 13:40:59.887007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:57.072 [2024-11-20 13:40:59.887220] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:57.072 [2024-11-20 13:40:59.887240] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:57.072 [2024-11-20 13:40:59.887429] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:57.072 "name": "raid_bdev1", 00:20:57.072 "uuid": 
"99865afa-8aed-499f-9b57-c91c9f8f1b8f", 00:20:57.072 "strip_size_kb": 64, 00:20:57.072 "state": "online", 00:20:57.072 "raid_level": "raid0", 00:20:57.072 "superblock": true, 00:20:57.072 "num_base_bdevs": 2, 00:20:57.072 "num_base_bdevs_discovered": 2, 00:20:57.072 "num_base_bdevs_operational": 2, 00:20:57.072 "base_bdevs_list": [ 00:20:57.072 { 00:20:57.072 "name": "pt1", 00:20:57.072 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:57.072 "is_configured": true, 00:20:57.072 "data_offset": 2048, 00:20:57.072 "data_size": 63488 00:20:57.072 }, 00:20:57.072 { 00:20:57.072 "name": "pt2", 00:20:57.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:57.072 "is_configured": true, 00:20:57.072 "data_offset": 2048, 00:20:57.072 "data_size": 63488 00:20:57.072 } 00:20:57.072 ] 00:20:57.072 }' 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:57.072 13:40:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.641 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:57.641 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:57.641 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:57.641 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:57.641 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:57.641 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:57.641 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:57.641 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:57.641 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.641 13:41:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.641 [2024-11-20 13:41:00.412396] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:57.641 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.641 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:57.641 "name": "raid_bdev1", 00:20:57.641 "aliases": [ 00:20:57.641 "99865afa-8aed-499f-9b57-c91c9f8f1b8f" 00:20:57.641 ], 00:20:57.641 "product_name": "Raid Volume", 00:20:57.641 "block_size": 512, 00:20:57.641 "num_blocks": 126976, 00:20:57.641 "uuid": "99865afa-8aed-499f-9b57-c91c9f8f1b8f", 00:20:57.641 "assigned_rate_limits": { 00:20:57.641 "rw_ios_per_sec": 0, 00:20:57.641 "rw_mbytes_per_sec": 0, 00:20:57.641 "r_mbytes_per_sec": 0, 00:20:57.641 "w_mbytes_per_sec": 0 00:20:57.641 }, 00:20:57.641 "claimed": false, 00:20:57.641 "zoned": false, 00:20:57.642 "supported_io_types": { 00:20:57.642 "read": true, 00:20:57.642 "write": true, 00:20:57.642 "unmap": true, 00:20:57.642 "flush": true, 00:20:57.642 "reset": true, 00:20:57.642 "nvme_admin": false, 00:20:57.642 "nvme_io": false, 00:20:57.642 "nvme_io_md": false, 00:20:57.642 "write_zeroes": true, 00:20:57.642 "zcopy": false, 00:20:57.642 "get_zone_info": false, 00:20:57.642 "zone_management": false, 00:20:57.642 "zone_append": false, 00:20:57.642 "compare": false, 00:20:57.642 "compare_and_write": false, 00:20:57.642 "abort": false, 00:20:57.642 "seek_hole": false, 00:20:57.642 "seek_data": false, 00:20:57.642 "copy": false, 00:20:57.642 "nvme_iov_md": false 00:20:57.642 }, 00:20:57.642 "memory_domains": [ 00:20:57.642 { 00:20:57.642 "dma_device_id": "system", 00:20:57.642 "dma_device_type": 1 00:20:57.642 }, 00:20:57.642 { 00:20:57.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.642 "dma_device_type": 2 00:20:57.642 }, 00:20:57.642 { 00:20:57.642 "dma_device_id": "system", 00:20:57.642 "dma_device_type": 
1 00:20:57.642 }, 00:20:57.642 { 00:20:57.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.642 "dma_device_type": 2 00:20:57.642 } 00:20:57.642 ], 00:20:57.642 "driver_specific": { 00:20:57.642 "raid": { 00:20:57.642 "uuid": "99865afa-8aed-499f-9b57-c91c9f8f1b8f", 00:20:57.642 "strip_size_kb": 64, 00:20:57.642 "state": "online", 00:20:57.642 "raid_level": "raid0", 00:20:57.642 "superblock": true, 00:20:57.642 "num_base_bdevs": 2, 00:20:57.642 "num_base_bdevs_discovered": 2, 00:20:57.642 "num_base_bdevs_operational": 2, 00:20:57.642 "base_bdevs_list": [ 00:20:57.642 { 00:20:57.642 "name": "pt1", 00:20:57.642 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:57.642 "is_configured": true, 00:20:57.642 "data_offset": 2048, 00:20:57.642 "data_size": 63488 00:20:57.642 }, 00:20:57.642 { 00:20:57.642 "name": "pt2", 00:20:57.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:57.642 "is_configured": true, 00:20:57.642 "data_offset": 2048, 00:20:57.642 "data_size": 63488 00:20:57.642 } 00:20:57.642 ] 00:20:57.642 } 00:20:57.642 } 00:20:57.642 }' 00:20:57.642 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:57.642 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:57.642 pt2' 00:20:57.642 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.901 13:41:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.901 [2024-11-20 13:41:00.688455] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=99865afa-8aed-499f-9b57-c91c9f8f1b8f 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 99865afa-8aed-499f-9b57-c91c9f8f1b8f ']' 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.901 [2024-11-20 13:41:00.736088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:57.901 [2024-11-20 13:41:00.736252] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:57.901 [2024-11-20 13:41:00.736539] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:57.901 [2024-11-20 13:41:00.736613] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:57.901 [2024-11-20 13:41:00.736633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:20:57.901 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:57.902 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:57.902 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:57.902 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:57.902 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.902 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.902 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.902 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:57.902 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:57.902 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.902 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.902 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.161 [2024-11-20 13:41:00.876661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:58.161 [2024-11-20 13:41:00.879239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:58.161 [2024-11-20 13:41:00.879536] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:58.161 [2024-11-20 13:41:00.879625] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:58.161 [2024-11-20 13:41:00.879653] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:58.161 [2024-11-20 13:41:00.879672] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:58.161 request: 00:20:58.161 { 00:20:58.161 "name": "raid_bdev1", 00:20:58.161 "raid_level": "raid0", 00:20:58.161 "base_bdevs": [ 00:20:58.161 "malloc1", 00:20:58.161 "malloc2" 00:20:58.161 ], 00:20:58.161 "strip_size_kb": 64, 00:20:58.161 "superblock": false, 00:20:58.161 "method": "bdev_raid_create", 00:20:58.161 "req_id": 1 00:20:58.161 } 00:20:58.161 Got JSON-RPC error response 00:20:58.161 response: 00:20:58.161 { 00:20:58.161 "code": -17, 00:20:58.161 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:58.161 } 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.161 [2024-11-20 13:41:00.944645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:58.161 [2024-11-20 13:41:00.944861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.161 [2024-11-20 13:41:00.944956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:58.161 [2024-11-20 13:41:00.945175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.161 [2024-11-20 13:41:00.948224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.161 [2024-11-20 13:41:00.948384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:58.161 [2024-11-20 13:41:00.948510] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:58.161 [2024-11-20 13:41:00.948585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:58.161 pt1 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.161 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.162 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.162 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.162 13:41:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.162 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.162 13:41:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.162 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.162 "name": "raid_bdev1", 00:20:58.162 "uuid": "99865afa-8aed-499f-9b57-c91c9f8f1b8f", 00:20:58.162 "strip_size_kb": 64, 00:20:58.162 "state": "configuring", 00:20:58.162 "raid_level": "raid0", 00:20:58.162 "superblock": true, 00:20:58.162 "num_base_bdevs": 2, 00:20:58.162 "num_base_bdevs_discovered": 1, 00:20:58.162 "num_base_bdevs_operational": 2, 00:20:58.162 "base_bdevs_list": [ 00:20:58.162 { 00:20:58.162 "name": "pt1", 00:20:58.162 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:58.162 "is_configured": true, 00:20:58.162 "data_offset": 2048, 00:20:58.162 "data_size": 63488 00:20:58.162 }, 00:20:58.162 { 00:20:58.162 "name": null, 00:20:58.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:58.162 "is_configured": false, 00:20:58.162 "data_offset": 2048, 00:20:58.162 "data_size": 63488 00:20:58.162 } 00:20:58.162 ] 00:20:58.162 }' 00:20:58.162 13:41:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.162 13:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.730 [2024-11-20 13:41:01.481051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:58.730 [2024-11-20 13:41:01.481144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.730 [2024-11-20 13:41:01.481178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:58.730 [2024-11-20 13:41:01.481196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.730 [2024-11-20 13:41:01.481766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.730 [2024-11-20 13:41:01.481805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:58.730 [2024-11-20 13:41:01.481957] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:58.730 [2024-11-20 13:41:01.481999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:58.730 [2024-11-20 13:41:01.482143] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:58.730 [2024-11-20 13:41:01.482163] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:58.730 [2024-11-20 13:41:01.482459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:58.730 [2024-11-20 13:41:01.482651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:58.730 [2024-11-20 13:41:01.482672] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:58.730 [2024-11-20 13:41:01.482839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:58.730 pt2 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.730 "name": "raid_bdev1", 00:20:58.730 "uuid": "99865afa-8aed-499f-9b57-c91c9f8f1b8f", 00:20:58.730 "strip_size_kb": 64, 00:20:58.730 "state": "online", 00:20:58.730 "raid_level": "raid0", 00:20:58.730 "superblock": true, 00:20:58.730 "num_base_bdevs": 2, 00:20:58.730 "num_base_bdevs_discovered": 2, 00:20:58.730 "num_base_bdevs_operational": 2, 00:20:58.730 "base_bdevs_list": [ 00:20:58.730 { 00:20:58.730 "name": "pt1", 00:20:58.730 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:58.730 "is_configured": true, 00:20:58.730 "data_offset": 2048, 00:20:58.730 "data_size": 63488 00:20:58.730 }, 00:20:58.730 { 00:20:58.730 "name": "pt2", 00:20:58.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:58.730 "is_configured": true, 00:20:58.730 "data_offset": 2048, 00:20:58.730 "data_size": 63488 00:20:58.730 } 00:20:58.730 ] 00:20:58.730 }' 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.730 13:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.300 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:59.300 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:59.300 
13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:59.300 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:59.300 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:59.300 13:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:59.300 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:59.300 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.300 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.300 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:59.300 [2024-11-20 13:41:02.009529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:59.300 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.300 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:59.300 "name": "raid_bdev1", 00:20:59.300 "aliases": [ 00:20:59.300 "99865afa-8aed-499f-9b57-c91c9f8f1b8f" 00:20:59.300 ], 00:20:59.300 "product_name": "Raid Volume", 00:20:59.300 "block_size": 512, 00:20:59.300 "num_blocks": 126976, 00:20:59.300 "uuid": "99865afa-8aed-499f-9b57-c91c9f8f1b8f", 00:20:59.300 "assigned_rate_limits": { 00:20:59.300 "rw_ios_per_sec": 0, 00:20:59.300 "rw_mbytes_per_sec": 0, 00:20:59.300 "r_mbytes_per_sec": 0, 00:20:59.300 "w_mbytes_per_sec": 0 00:20:59.300 }, 00:20:59.300 "claimed": false, 00:20:59.300 "zoned": false, 00:20:59.300 "supported_io_types": { 00:20:59.300 "read": true, 00:20:59.300 "write": true, 00:20:59.300 "unmap": true, 00:20:59.300 "flush": true, 00:20:59.300 "reset": true, 00:20:59.300 "nvme_admin": false, 00:20:59.300 "nvme_io": false, 00:20:59.300 "nvme_io_md": false, 00:20:59.300 
"write_zeroes": true, 00:20:59.300 "zcopy": false, 00:20:59.300 "get_zone_info": false, 00:20:59.300 "zone_management": false, 00:20:59.300 "zone_append": false, 00:20:59.300 "compare": false, 00:20:59.300 "compare_and_write": false, 00:20:59.300 "abort": false, 00:20:59.300 "seek_hole": false, 00:20:59.300 "seek_data": false, 00:20:59.300 "copy": false, 00:20:59.300 "nvme_iov_md": false 00:20:59.300 }, 00:20:59.300 "memory_domains": [ 00:20:59.300 { 00:20:59.300 "dma_device_id": "system", 00:20:59.300 "dma_device_type": 1 00:20:59.300 }, 00:20:59.300 { 00:20:59.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.300 "dma_device_type": 2 00:20:59.300 }, 00:20:59.300 { 00:20:59.300 "dma_device_id": "system", 00:20:59.300 "dma_device_type": 1 00:20:59.300 }, 00:20:59.300 { 00:20:59.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.300 "dma_device_type": 2 00:20:59.300 } 00:20:59.300 ], 00:20:59.300 "driver_specific": { 00:20:59.300 "raid": { 00:20:59.300 "uuid": "99865afa-8aed-499f-9b57-c91c9f8f1b8f", 00:20:59.300 "strip_size_kb": 64, 00:20:59.300 "state": "online", 00:20:59.300 "raid_level": "raid0", 00:20:59.300 "superblock": true, 00:20:59.300 "num_base_bdevs": 2, 00:20:59.300 "num_base_bdevs_discovered": 2, 00:20:59.300 "num_base_bdevs_operational": 2, 00:20:59.300 "base_bdevs_list": [ 00:20:59.300 { 00:20:59.300 "name": "pt1", 00:20:59.300 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:59.300 "is_configured": true, 00:20:59.300 "data_offset": 2048, 00:20:59.300 "data_size": 63488 00:20:59.300 }, 00:20:59.300 { 00:20:59.300 "name": "pt2", 00:20:59.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:59.300 "is_configured": true, 00:20:59.300 "data_offset": 2048, 00:20:59.300 "data_size": 63488 00:20:59.300 } 00:20:59.300 ] 00:20:59.300 } 00:20:59.300 } 00:20:59.300 }' 00:20:59.300 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:20:59.300 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:59.300 pt2' 00:20:59.300 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:59.300 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:59.300 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:59.300 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:59.300 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.300 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.300 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:59.301 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.561 13:41:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:59.561 [2024-11-20 13:41:02.281624] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 99865afa-8aed-499f-9b57-c91c9f8f1b8f '!=' 99865afa-8aed-499f-9b57-c91c9f8f1b8f ']' 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61321 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61321 ']' 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61321 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61321 00:20:59.561 killing process with pid 61321 
00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61321' 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61321 00:20:59.561 [2024-11-20 13:41:02.365802] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:59.561 [2024-11-20 13:41:02.365957] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:59.561 13:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61321 00:20:59.561 [2024-11-20 13:41:02.366036] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:59.561 [2024-11-20 13:41:02.366060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:59.820 [2024-11-20 13:41:02.557351] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:00.757 13:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:00.757 00:21:00.757 real 0m5.001s 00:21:00.757 user 0m7.401s 00:21:00.757 sys 0m0.758s 00:21:00.757 13:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.757 13:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.757 ************************************ 00:21:00.757 END TEST raid_superblock_test 00:21:00.757 ************************************ 00:21:00.757 13:41:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:21:00.757 13:41:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:00.757 13:41:03 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.757 13:41:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:00.757 ************************************ 00:21:00.757 START TEST raid_read_error_test 00:21:00.757 ************************************ 00:21:00.757 13:41:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:21:00.757 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:21:00.757 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:21:00.757 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:21:00.757 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:00.757 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:00.757 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:00.757 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:00.757 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:00.757 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:00.757 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:00.757 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:00.757 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:00.757 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:00.757 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:00.758 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:00.758 13:41:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:00.758 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:00.758 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:00.758 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:21:00.758 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:21:00.758 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:21:00.758 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:00.758 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oL5uUYt6Yq 00:21:00.758 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61538 00:21:00.758 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61538 00:21:00.758 13:41:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61538 ']' 00:21:00.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.758 13:41:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.758 13:41:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.758 13:41:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:00.758 13:41:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.758 13:41:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.758 13:41:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:01.016 [2024-11-20 13:41:03.769771] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:21:01.016 [2024-11-20 13:41:03.770287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61538 ] 00:21:01.275 [2024-11-20 13:41:03.959801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.275 [2024-11-20 13:41:04.092477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.534 [2024-11-20 13:41:04.302041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:01.534 [2024-11-20 13:41:04.302099] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:02.103 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.103 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:21:02.103 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:02.103 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:02.103 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.103 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.103 BaseBdev1_malloc 00:21:02.103 13:41:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.104 true 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.104 [2024-11-20 13:41:04.822876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:02.104 [2024-11-20 13:41:04.822982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.104 [2024-11-20 13:41:04.823024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:02.104 [2024-11-20 13:41:04.823042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.104 [2024-11-20 13:41:04.826011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.104 [2024-11-20 13:41:04.826204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:02.104 BaseBdev1 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.104 BaseBdev2_malloc 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.104 true 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.104 [2024-11-20 13:41:04.879698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:02.104 [2024-11-20 13:41:04.879769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.104 [2024-11-20 13:41:04.879794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:02.104 [2024-11-20 13:41:04.879811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.104 [2024-11-20 13:41:04.882618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.104 [2024-11-20 13:41:04.882669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:02.104 BaseBdev2 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.104 [2024-11-20 13:41:04.891769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:02.104 [2024-11-20 13:41:04.894330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:02.104 [2024-11-20 13:41:04.894576] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:02.104 [2024-11-20 13:41:04.894610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:02.104 [2024-11-20 13:41:04.894945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:02.104 [2024-11-20 13:41:04.895182] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:02.104 [2024-11-20 13:41:04.895205] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:02.104 [2024-11-20 13:41:04.895395] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.104 "name": "raid_bdev1", 00:21:02.104 "uuid": "c2552b55-b847-4d0b-936c-1066dd0032fd", 00:21:02.104 "strip_size_kb": 64, 00:21:02.104 "state": "online", 00:21:02.104 "raid_level": "raid0", 00:21:02.104 "superblock": true, 00:21:02.104 "num_base_bdevs": 2, 00:21:02.104 "num_base_bdevs_discovered": 2, 00:21:02.104 "num_base_bdevs_operational": 2, 00:21:02.104 "base_bdevs_list": [ 00:21:02.104 { 00:21:02.104 "name": "BaseBdev1", 00:21:02.104 "uuid": "32312a88-cf00-5ce0-9666-eff72c9778c5", 00:21:02.104 "is_configured": true, 00:21:02.104 "data_offset": 2048, 00:21:02.104 "data_size": 63488 00:21:02.104 }, 00:21:02.104 { 00:21:02.104 "name": "BaseBdev2", 00:21:02.104 "uuid": 
"aa83307f-faa7-5064-83f2-0a2d0d96c1bd", 00:21:02.104 "is_configured": true, 00:21:02.104 "data_offset": 2048, 00:21:02.104 "data_size": 63488 00:21:02.104 } 00:21:02.104 ] 00:21:02.104 }' 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.104 13:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.677 13:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:02.677 13:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:02.677 [2024-11-20 13:41:05.521341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:03.617 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:03.617 13:41:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.617 13:41:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.617 13:41:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.617 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:03.617 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:21:03.617 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:21:03.617 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:21:03.617 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:03.617 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:03.617 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:21:03.617 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:03.617 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:03.617 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:03.617 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:03.618 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:03.618 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:03.618 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.618 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.618 13:41:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.618 13:41:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.618 13:41:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.618 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:03.618 "name": "raid_bdev1", 00:21:03.618 "uuid": "c2552b55-b847-4d0b-936c-1066dd0032fd", 00:21:03.618 "strip_size_kb": 64, 00:21:03.618 "state": "online", 00:21:03.618 "raid_level": "raid0", 00:21:03.618 "superblock": true, 00:21:03.618 "num_base_bdevs": 2, 00:21:03.618 "num_base_bdevs_discovered": 2, 00:21:03.618 "num_base_bdevs_operational": 2, 00:21:03.618 "base_bdevs_list": [ 00:21:03.618 { 00:21:03.618 "name": "BaseBdev1", 00:21:03.618 "uuid": "32312a88-cf00-5ce0-9666-eff72c9778c5", 00:21:03.618 "is_configured": true, 00:21:03.618 "data_offset": 2048, 00:21:03.618 "data_size": 63488 00:21:03.618 }, 00:21:03.618 { 00:21:03.618 "name": "BaseBdev2", 00:21:03.618 "uuid": 
"aa83307f-faa7-5064-83f2-0a2d0d96c1bd", 00:21:03.618 "is_configured": true, 00:21:03.618 "data_offset": 2048, 00:21:03.618 "data_size": 63488 00:21:03.618 } 00:21:03.618 ] 00:21:03.618 }' 00:21:03.618 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:03.618 13:41:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.187 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:04.187 13:41:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.187 13:41:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.187 [2024-11-20 13:41:06.977172] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:04.187 [2024-11-20 13:41:06.977213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:04.187 { 00:21:04.187 "results": [ 00:21:04.187 { 00:21:04.187 "job": "raid_bdev1", 00:21:04.187 "core_mask": "0x1", 00:21:04.187 "workload": "randrw", 00:21:04.187 "percentage": 50, 00:21:04.187 "status": "finished", 00:21:04.187 "queue_depth": 1, 00:21:04.187 "io_size": 131072, 00:21:04.187 "runtime": 1.453488, 00:21:04.187 "iops": 10846.322776658631, 00:21:04.187 "mibps": 1355.790347082329, 00:21:04.187 "io_failed": 1, 00:21:04.187 "io_timeout": 0, 00:21:04.187 "avg_latency_us": 128.2415471728576, 00:21:04.187 "min_latency_us": 38.4, 00:21:04.187 "max_latency_us": 1854.370909090909 00:21:04.187 } 00:21:04.187 ], 00:21:04.187 "core_count": 1 00:21:04.187 } 00:21:04.187 [2024-11-20 13:41:06.980691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:04.187 [2024-11-20 13:41:06.980743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.187 [2024-11-20 13:41:06.980784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:21:04.188 [2024-11-20 13:41:06.980801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:04.188 13:41:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.188 13:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61538 00:21:04.188 13:41:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61538 ']' 00:21:04.188 13:41:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61538 00:21:04.188 13:41:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:21:04.188 13:41:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.188 13:41:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61538 00:21:04.188 killing process with pid 61538 00:21:04.188 13:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:04.188 13:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:04.188 13:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61538' 00:21:04.188 13:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61538 00:21:04.188 [2024-11-20 13:41:07.022068] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:04.188 13:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61538 00:21:04.447 [2024-11-20 13:41:07.132220] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:05.383 13:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oL5uUYt6Yq 00:21:05.383 13:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:05.383 13:41:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:05.383 13:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:21:05.383 13:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:21:05.383 ************************************ 00:21:05.383 END TEST raid_read_error_test 00:21:05.383 ************************************ 00:21:05.383 13:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:05.383 13:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:05.383 13:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:21:05.383 00:21:05.383 real 0m4.591s 00:21:05.383 user 0m5.772s 00:21:05.383 sys 0m0.569s 00:21:05.383 13:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.383 13:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.383 13:41:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:21:05.383 13:41:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:05.383 13:41:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.383 13:41:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:05.383 ************************************ 00:21:05.383 START TEST raid_write_error_test 00:21:05.383 ************************************ 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:21:05.383 
13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:21:05.383 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:05.383 13:41:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.PTXzUiO4mB 00:21:05.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.642 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61678 00:21:05.642 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61678 00:21:05.642 13:41:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61678 ']' 00:21:05.642 13:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:05.642 13:41:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.642 13:41:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.642 13:41:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.642 13:41:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.642 13:41:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.642 [2024-11-20 13:41:08.406346] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:21:05.642 [2024-11-20 13:41:08.406517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61678 ] 00:21:05.902 [2024-11-20 13:41:08.587825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.902 [2024-11-20 13:41:08.711025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.161 [2024-11-20 13:41:08.912838] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:06.161 [2024-11-20 13:41:08.912922] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:06.738 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.738 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:21:06.738 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:06.738 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:06.738 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.738 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.738 BaseBdev1_malloc 00:21:06.738 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.738 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:06.738 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.738 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.738 true 00:21:06.738 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:21:06.738 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:06.738 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.738 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.738 [2024-11-20 13:41:09.448188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:06.738 [2024-11-20 13:41:09.448320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.738 [2024-11-20 13:41:09.448348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:06.738 [2024-11-20 13:41:09.448366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.738 [2024-11-20 13:41:09.451199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.739 [2024-11-20 13:41:09.451382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:06.739 BaseBdev1 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.739 BaseBdev2_malloc 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:06.739 13:41:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.739 true 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.739 [2024-11-20 13:41:09.513342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:06.739 [2024-11-20 13:41:09.513426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.739 [2024-11-20 13:41:09.513450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:06.739 [2024-11-20 13:41:09.513484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.739 [2024-11-20 13:41:09.516367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.739 [2024-11-20 13:41:09.516433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:06.739 BaseBdev2 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.739 [2024-11-20 13:41:09.521401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:21:06.739 [2024-11-20 13:41:09.524003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:06.739 [2024-11-20 13:41:09.524261] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:06.739 [2024-11-20 13:41:09.524301] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:06.739 [2024-11-20 13:41:09.524564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:06.739 [2024-11-20 13:41:09.524781] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:06.739 [2024-11-20 13:41:09.524809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:06.739 [2024-11-20 13:41:09.525022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.739 "name": "raid_bdev1", 00:21:06.739 "uuid": "e2940600-6009-4588-b8f3-5ace5d4360a3", 00:21:06.739 "strip_size_kb": 64, 00:21:06.739 "state": "online", 00:21:06.739 "raid_level": "raid0", 00:21:06.739 "superblock": true, 00:21:06.739 "num_base_bdevs": 2, 00:21:06.739 "num_base_bdevs_discovered": 2, 00:21:06.739 "num_base_bdevs_operational": 2, 00:21:06.739 "base_bdevs_list": [ 00:21:06.739 { 00:21:06.739 "name": "BaseBdev1", 00:21:06.739 "uuid": "660b795b-482d-5860-ab9b-1ed6612d6817", 00:21:06.739 "is_configured": true, 00:21:06.739 "data_offset": 2048, 00:21:06.739 "data_size": 63488 00:21:06.739 }, 00:21:06.739 { 00:21:06.739 "name": "BaseBdev2", 00:21:06.739 "uuid": "9c22fafd-5730-5711-8dbd-17237aa1f47f", 00:21:06.739 "is_configured": true, 00:21:06.739 "data_offset": 2048, 00:21:06.739 "data_size": 63488 00:21:06.739 } 00:21:06.739 ] 00:21:06.739 }' 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.739 13:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.307 13:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:07.307 13:41:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:07.307 [2024-11-20 13:41:10.182940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.240 13:41:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.240 "name": "raid_bdev1", 00:21:08.240 "uuid": "e2940600-6009-4588-b8f3-5ace5d4360a3", 00:21:08.240 "strip_size_kb": 64, 00:21:08.240 "state": "online", 00:21:08.240 "raid_level": "raid0", 00:21:08.240 "superblock": true, 00:21:08.240 "num_base_bdevs": 2, 00:21:08.240 "num_base_bdevs_discovered": 2, 00:21:08.240 "num_base_bdevs_operational": 2, 00:21:08.240 "base_bdevs_list": [ 00:21:08.240 { 00:21:08.240 "name": "BaseBdev1", 00:21:08.240 "uuid": "660b795b-482d-5860-ab9b-1ed6612d6817", 00:21:08.240 "is_configured": true, 00:21:08.240 "data_offset": 2048, 00:21:08.240 "data_size": 63488 00:21:08.240 }, 00:21:08.240 { 00:21:08.240 "name": "BaseBdev2", 00:21:08.240 "uuid": "9c22fafd-5730-5711-8dbd-17237aa1f47f", 00:21:08.240 "is_configured": true, 00:21:08.240 "data_offset": 2048, 00:21:08.240 "data_size": 63488 00:21:08.240 } 00:21:08.240 ] 00:21:08.240 }' 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.240 13:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.807 13:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:21:08.807 13:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.807 13:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.807 [2024-11-20 13:41:11.616988] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:08.807 [2024-11-20 13:41:11.617161] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:08.807 [2024-11-20 13:41:11.620688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:08.807 [2024-11-20 13:41:11.620743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:08.807 [2024-11-20 13:41:11.620788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:08.807 [2024-11-20 13:41:11.620806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:08.807 { 00:21:08.807 "results": [ 00:21:08.807 { 00:21:08.807 "job": "raid_bdev1", 00:21:08.807 "core_mask": "0x1", 00:21:08.807 "workload": "randrw", 00:21:08.807 "percentage": 50, 00:21:08.807 "status": "finished", 00:21:08.807 "queue_depth": 1, 00:21:08.807 "io_size": 131072, 00:21:08.807 "runtime": 1.431843, 00:21:08.807 "iops": 10311.186352134975, 00:21:08.807 "mibps": 1288.898294016872, 00:21:08.807 "io_failed": 1, 00:21:08.807 "io_timeout": 0, 00:21:08.807 "avg_latency_us": 134.93972551796324, 00:21:08.807 "min_latency_us": 39.09818181818182, 00:21:08.807 "max_latency_us": 1854.370909090909 00:21:08.807 } 00:21:08.807 ], 00:21:08.807 "core_count": 1 00:21:08.807 } 00:21:08.807 13:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.807 13:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61678 00:21:08.807 13:41:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61678 ']' 00:21:08.807 13:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61678 00:21:08.807 13:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:21:08.807 13:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.807 13:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61678 00:21:08.807 killing process with pid 61678 00:21:08.807 13:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:08.807 13:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:08.807 13:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61678' 00:21:08.807 13:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61678 00:21:08.807 [2024-11-20 13:41:11.657142] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:08.807 13:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61678 00:21:09.066 [2024-11-20 13:41:11.775285] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:10.438 13:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.PTXzUiO4mB 00:21:10.438 13:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:10.438 13:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:10.438 13:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:21:10.438 13:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:21:10.438 13:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:10.438 13:41:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:21:10.439 13:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:21:10.439 00:21:10.439 real 0m4.638s 00:21:10.439 user 0m5.817s 00:21:10.439 sys 0m0.576s 00:21:10.439 ************************************ 00:21:10.439 END TEST raid_write_error_test 00:21:10.439 ************************************ 00:21:10.439 13:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.439 13:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.439 13:41:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:21:10.439 13:41:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:21:10.439 13:41:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:10.439 13:41:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.439 13:41:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:10.439 ************************************ 00:21:10.439 START TEST raid_state_function_test 00:21:10.439 ************************************ 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:10.439 Process raid pid: 61827 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61827 
00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61827' 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61827 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61827 ']' 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.439 13:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.439 [2024-11-20 13:41:13.110129] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:21:10.439 [2024-11-20 13:41:13.110578] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.439 [2024-11-20 13:41:13.297148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.698 [2024-11-20 13:41:13.433273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.983 [2024-11-20 13:41:13.638739] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:10.983 [2024-11-20 13:41:13.639080] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.241 [2024-11-20 13:41:14.095330] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:11.241 [2024-11-20 13:41:14.095942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:11.241 [2024-11-20 13:41:14.095977] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:11.241 [2024-11-20 13:41:14.095999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.241 13:41:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.241 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.500 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.500 "name": "Existed_Raid", 00:21:11.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.500 "strip_size_kb": 64, 00:21:11.500 "state": "configuring", 00:21:11.500 
"raid_level": "concat", 00:21:11.500 "superblock": false, 00:21:11.500 "num_base_bdevs": 2, 00:21:11.500 "num_base_bdevs_discovered": 0, 00:21:11.500 "num_base_bdevs_operational": 2, 00:21:11.500 "base_bdevs_list": [ 00:21:11.500 { 00:21:11.500 "name": "BaseBdev1", 00:21:11.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.500 "is_configured": false, 00:21:11.500 "data_offset": 0, 00:21:11.500 "data_size": 0 00:21:11.500 }, 00:21:11.500 { 00:21:11.500 "name": "BaseBdev2", 00:21:11.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.500 "is_configured": false, 00:21:11.500 "data_offset": 0, 00:21:11.500 "data_size": 0 00:21:11.500 } 00:21:11.500 ] 00:21:11.500 }' 00:21:11.500 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.500 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.758 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:11.758 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.758 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.758 [2024-11-20 13:41:14.635155] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:11.758 [2024-11-20 13:41:14.635198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:11.758 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.758 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:11.758 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.758 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:21:11.758 [2024-11-20 13:41:14.643113] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:11.758 [2024-11-20 13:41:14.643170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:11.758 [2024-11-20 13:41:14.643186] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:11.758 [2024-11-20 13:41:14.643204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:11.758 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.758 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:11.758 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.758 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.018 [2024-11-20 13:41:14.688010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:12.018 BaseBdev1 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.018 [ 00:21:12.018 { 00:21:12.018 "name": "BaseBdev1", 00:21:12.018 "aliases": [ 00:21:12.018 "389c49b6-e569-452e-96bd-7b19753cd9ef" 00:21:12.018 ], 00:21:12.018 "product_name": "Malloc disk", 00:21:12.018 "block_size": 512, 00:21:12.018 "num_blocks": 65536, 00:21:12.018 "uuid": "389c49b6-e569-452e-96bd-7b19753cd9ef", 00:21:12.018 "assigned_rate_limits": { 00:21:12.018 "rw_ios_per_sec": 0, 00:21:12.018 "rw_mbytes_per_sec": 0, 00:21:12.018 "r_mbytes_per_sec": 0, 00:21:12.018 "w_mbytes_per_sec": 0 00:21:12.018 }, 00:21:12.018 "claimed": true, 00:21:12.018 "claim_type": "exclusive_write", 00:21:12.018 "zoned": false, 00:21:12.018 "supported_io_types": { 00:21:12.018 "read": true, 00:21:12.018 "write": true, 00:21:12.018 "unmap": true, 00:21:12.018 "flush": true, 00:21:12.018 "reset": true, 00:21:12.018 "nvme_admin": false, 00:21:12.018 "nvme_io": false, 00:21:12.018 "nvme_io_md": false, 00:21:12.018 "write_zeroes": true, 00:21:12.018 "zcopy": true, 00:21:12.018 "get_zone_info": false, 00:21:12.018 "zone_management": false, 00:21:12.018 "zone_append": false, 00:21:12.018 "compare": false, 00:21:12.018 "compare_and_write": false, 00:21:12.018 "abort": true, 00:21:12.018 "seek_hole": false, 00:21:12.018 "seek_data": false, 00:21:12.018 "copy": true, 00:21:12.018 "nvme_iov_md": 
false 00:21:12.018 }, 00:21:12.018 "memory_domains": [ 00:21:12.018 { 00:21:12.018 "dma_device_id": "system", 00:21:12.018 "dma_device_type": 1 00:21:12.018 }, 00:21:12.018 { 00:21:12.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.018 "dma_device_type": 2 00:21:12.018 } 00:21:12.018 ], 00:21:12.018 "driver_specific": {} 00:21:12.018 } 00:21:12.018 ] 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.018 
13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.018 "name": "Existed_Raid", 00:21:12.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.018 "strip_size_kb": 64, 00:21:12.018 "state": "configuring", 00:21:12.018 "raid_level": "concat", 00:21:12.018 "superblock": false, 00:21:12.018 "num_base_bdevs": 2, 00:21:12.018 "num_base_bdevs_discovered": 1, 00:21:12.018 "num_base_bdevs_operational": 2, 00:21:12.018 "base_bdevs_list": [ 00:21:12.018 { 00:21:12.018 "name": "BaseBdev1", 00:21:12.018 "uuid": "389c49b6-e569-452e-96bd-7b19753cd9ef", 00:21:12.018 "is_configured": true, 00:21:12.018 "data_offset": 0, 00:21:12.018 "data_size": 65536 00:21:12.018 }, 00:21:12.018 { 00:21:12.018 "name": "BaseBdev2", 00:21:12.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.018 "is_configured": false, 00:21:12.018 "data_offset": 0, 00:21:12.018 "data_size": 0 00:21:12.018 } 00:21:12.018 ] 00:21:12.018 }' 00:21:12.018 13:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.019 13:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.587 [2024-11-20 13:41:15.256700] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:12.587 [2024-11-20 13:41:15.256767] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.587 [2024-11-20 13:41:15.264702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:12.587 [2024-11-20 13:41:15.267274] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:12.587 [2024-11-20 13:41:15.267554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.587 "name": "Existed_Raid", 00:21:12.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.587 "strip_size_kb": 64, 00:21:12.587 "state": "configuring", 00:21:12.587 "raid_level": "concat", 00:21:12.587 "superblock": false, 00:21:12.587 "num_base_bdevs": 2, 00:21:12.587 "num_base_bdevs_discovered": 1, 00:21:12.587 "num_base_bdevs_operational": 2, 00:21:12.587 "base_bdevs_list": [ 00:21:12.587 { 00:21:12.587 "name": "BaseBdev1", 00:21:12.587 "uuid": "389c49b6-e569-452e-96bd-7b19753cd9ef", 00:21:12.587 "is_configured": true, 00:21:12.587 "data_offset": 0, 00:21:12.587 "data_size": 65536 00:21:12.587 }, 00:21:12.587 { 00:21:12.587 "name": "BaseBdev2", 00:21:12.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.587 "is_configured": false, 00:21:12.587 "data_offset": 0, 00:21:12.587 "data_size": 0 00:21:12.587 } 
00:21:12.587 ] 00:21:12.587 }' 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.587 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.154 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.155 [2024-11-20 13:41:15.803507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:13.155 [2024-11-20 13:41:15.803774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:13.155 [2024-11-20 13:41:15.803798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:13.155 [2024-11-20 13:41:15.804199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:13.155 [2024-11-20 13:41:15.804476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:13.155 [2024-11-20 13:41:15.804497] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:13.155 [2024-11-20 13:41:15.804797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:13.155 BaseBdev2 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:13.155 13:41:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.155 [ 00:21:13.155 { 00:21:13.155 "name": "BaseBdev2", 00:21:13.155 "aliases": [ 00:21:13.155 "4debef55-aaa6-4fe5-bb99-e5f245f0c8f4" 00:21:13.155 ], 00:21:13.155 "product_name": "Malloc disk", 00:21:13.155 "block_size": 512, 00:21:13.155 "num_blocks": 65536, 00:21:13.155 "uuid": "4debef55-aaa6-4fe5-bb99-e5f245f0c8f4", 00:21:13.155 "assigned_rate_limits": { 00:21:13.155 "rw_ios_per_sec": 0, 00:21:13.155 "rw_mbytes_per_sec": 0, 00:21:13.155 "r_mbytes_per_sec": 0, 00:21:13.155 "w_mbytes_per_sec": 0 00:21:13.155 }, 00:21:13.155 "claimed": true, 00:21:13.155 "claim_type": "exclusive_write", 00:21:13.155 "zoned": false, 00:21:13.155 "supported_io_types": { 00:21:13.155 "read": true, 00:21:13.155 "write": true, 00:21:13.155 "unmap": true, 00:21:13.155 "flush": true, 00:21:13.155 "reset": true, 00:21:13.155 "nvme_admin": false, 00:21:13.155 "nvme_io": false, 00:21:13.155 "nvme_io_md": 
false, 00:21:13.155 "write_zeroes": true, 00:21:13.155 "zcopy": true, 00:21:13.155 "get_zone_info": false, 00:21:13.155 "zone_management": false, 00:21:13.155 "zone_append": false, 00:21:13.155 "compare": false, 00:21:13.155 "compare_and_write": false, 00:21:13.155 "abort": true, 00:21:13.155 "seek_hole": false, 00:21:13.155 "seek_data": false, 00:21:13.155 "copy": true, 00:21:13.155 "nvme_iov_md": false 00:21:13.155 }, 00:21:13.155 "memory_domains": [ 00:21:13.155 { 00:21:13.155 "dma_device_id": "system", 00:21:13.155 "dma_device_type": 1 00:21:13.155 }, 00:21:13.155 { 00:21:13.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.155 "dma_device_type": 2 00:21:13.155 } 00:21:13.155 ], 00:21:13.155 "driver_specific": {} 00:21:13.155 } 00:21:13.155 ] 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.155 "name": "Existed_Raid", 00:21:13.155 "uuid": "d271c8c8-a5de-406c-be78-5451b954cf10", 00:21:13.155 "strip_size_kb": 64, 00:21:13.155 "state": "online", 00:21:13.155 "raid_level": "concat", 00:21:13.155 "superblock": false, 00:21:13.155 "num_base_bdevs": 2, 00:21:13.155 "num_base_bdevs_discovered": 2, 00:21:13.155 "num_base_bdevs_operational": 2, 00:21:13.155 "base_bdevs_list": [ 00:21:13.155 { 00:21:13.155 "name": "BaseBdev1", 00:21:13.155 "uuid": "389c49b6-e569-452e-96bd-7b19753cd9ef", 00:21:13.155 "is_configured": true, 00:21:13.155 "data_offset": 0, 00:21:13.155 "data_size": 65536 00:21:13.155 }, 00:21:13.155 { 00:21:13.155 "name": "BaseBdev2", 00:21:13.155 "uuid": "4debef55-aaa6-4fe5-bb99-e5f245f0c8f4", 00:21:13.155 "is_configured": true, 00:21:13.155 "data_offset": 0, 00:21:13.155 "data_size": 65536 00:21:13.155 } 00:21:13.155 ] 00:21:13.155 }' 00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:21:13.155 13:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.722 [2024-11-20 13:41:16.376092] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:13.722 "name": "Existed_Raid", 00:21:13.722 "aliases": [ 00:21:13.722 "d271c8c8-a5de-406c-be78-5451b954cf10" 00:21:13.722 ], 00:21:13.722 "product_name": "Raid Volume", 00:21:13.722 "block_size": 512, 00:21:13.722 "num_blocks": 131072, 00:21:13.722 "uuid": "d271c8c8-a5de-406c-be78-5451b954cf10", 00:21:13.722 "assigned_rate_limits": { 00:21:13.722 "rw_ios_per_sec": 0, 00:21:13.722 "rw_mbytes_per_sec": 0, 00:21:13.722 "r_mbytes_per_sec": 
0, 00:21:13.722 "w_mbytes_per_sec": 0 00:21:13.722 }, 00:21:13.722 "claimed": false, 00:21:13.722 "zoned": false, 00:21:13.722 "supported_io_types": { 00:21:13.722 "read": true, 00:21:13.722 "write": true, 00:21:13.722 "unmap": true, 00:21:13.722 "flush": true, 00:21:13.722 "reset": true, 00:21:13.722 "nvme_admin": false, 00:21:13.722 "nvme_io": false, 00:21:13.722 "nvme_io_md": false, 00:21:13.722 "write_zeroes": true, 00:21:13.722 "zcopy": false, 00:21:13.722 "get_zone_info": false, 00:21:13.722 "zone_management": false, 00:21:13.722 "zone_append": false, 00:21:13.722 "compare": false, 00:21:13.722 "compare_and_write": false, 00:21:13.722 "abort": false, 00:21:13.722 "seek_hole": false, 00:21:13.722 "seek_data": false, 00:21:13.722 "copy": false, 00:21:13.722 "nvme_iov_md": false 00:21:13.722 }, 00:21:13.722 "memory_domains": [ 00:21:13.722 { 00:21:13.722 "dma_device_id": "system", 00:21:13.722 "dma_device_type": 1 00:21:13.722 }, 00:21:13.722 { 00:21:13.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.722 "dma_device_type": 2 00:21:13.722 }, 00:21:13.722 { 00:21:13.722 "dma_device_id": "system", 00:21:13.722 "dma_device_type": 1 00:21:13.722 }, 00:21:13.722 { 00:21:13.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.722 "dma_device_type": 2 00:21:13.722 } 00:21:13.722 ], 00:21:13.722 "driver_specific": { 00:21:13.722 "raid": { 00:21:13.722 "uuid": "d271c8c8-a5de-406c-be78-5451b954cf10", 00:21:13.722 "strip_size_kb": 64, 00:21:13.722 "state": "online", 00:21:13.722 "raid_level": "concat", 00:21:13.722 "superblock": false, 00:21:13.722 "num_base_bdevs": 2, 00:21:13.722 "num_base_bdevs_discovered": 2, 00:21:13.722 "num_base_bdevs_operational": 2, 00:21:13.722 "base_bdevs_list": [ 00:21:13.722 { 00:21:13.722 "name": "BaseBdev1", 00:21:13.722 "uuid": "389c49b6-e569-452e-96bd-7b19753cd9ef", 00:21:13.722 "is_configured": true, 00:21:13.722 "data_offset": 0, 00:21:13.722 "data_size": 65536 00:21:13.722 }, 00:21:13.722 { 00:21:13.722 "name": "BaseBdev2", 
00:21:13.722 "uuid": "4debef55-aaa6-4fe5-bb99-e5f245f0c8f4", 00:21:13.722 "is_configured": true, 00:21:13.722 "data_offset": 0, 00:21:13.722 "data_size": 65536 00:21:13.722 } 00:21:13.722 ] 00:21:13.722 } 00:21:13.722 } 00:21:13.722 }' 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:13.722 BaseBdev2' 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.722 13:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.981 [2024-11-20 13:41:16.643881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:13.981 [2024-11-20 13:41:16.643987] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:13.981 [2024-11-20 13:41:16.644075] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.981 "name": "Existed_Raid", 00:21:13.981 "uuid": "d271c8c8-a5de-406c-be78-5451b954cf10", 00:21:13.981 "strip_size_kb": 64, 00:21:13.981 
"state": "offline", 00:21:13.981 "raid_level": "concat", 00:21:13.981 "superblock": false, 00:21:13.981 "num_base_bdevs": 2, 00:21:13.981 "num_base_bdevs_discovered": 1, 00:21:13.981 "num_base_bdevs_operational": 1, 00:21:13.981 "base_bdevs_list": [ 00:21:13.981 { 00:21:13.981 "name": null, 00:21:13.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.981 "is_configured": false, 00:21:13.981 "data_offset": 0, 00:21:13.981 "data_size": 65536 00:21:13.981 }, 00:21:13.981 { 00:21:13.981 "name": "BaseBdev2", 00:21:13.981 "uuid": "4debef55-aaa6-4fe5-bb99-e5f245f0c8f4", 00:21:13.981 "is_configured": true, 00:21:13.981 "data_offset": 0, 00:21:13.981 "data_size": 65536 00:21:13.981 } 00:21:13.981 ] 00:21:13.981 }' 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.981 13:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.565 [2024-11-20 13:41:17.335119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:14.565 [2024-11-20 13:41:17.335189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.565 13:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.824 13:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:14.824 13:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:14.824 13:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:14.824 13:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61827 00:21:14.824 13:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61827 ']' 00:21:14.824 13:41:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61827 00:21:14.824 13:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:21:14.824 13:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.824 13:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61827 00:21:14.824 killing process with pid 61827 00:21:14.824 13:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.824 13:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.824 13:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61827' 00:21:14.824 13:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61827 00:21:14.824 [2024-11-20 13:41:17.521397] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:14.824 13:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61827 00:21:14.824 [2024-11-20 13:41:17.535863] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:15.759 13:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:21:15.759 00:21:15.759 real 0m5.573s 00:21:15.759 user 0m8.340s 00:21:15.759 sys 0m0.903s 00:21:15.759 ************************************ 00:21:15.759 END TEST raid_state_function_test 00:21:15.759 ************************************ 00:21:15.759 13:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:15.759 13:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.759 13:41:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:21:15.759 13:41:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:21:15.759 13:41:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:15.760 13:41:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:15.760 ************************************ 00:21:15.760 START TEST raid_state_function_test_sb 00:21:15.760 ************************************ 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:15.760 Process raid pid: 62080 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62080 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62080' 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62080 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62080 ']' 00:21:15.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.760 13:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.018 [2024-11-20 13:41:18.704585] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:21:16.018 [2024-11-20 13:41:18.704747] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.018 [2024-11-20 13:41:18.880636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.277 [2024-11-20 13:41:19.042050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.534 [2024-11-20 13:41:19.280589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:16.534 [2024-11-20 13:41:19.280668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.100 [2024-11-20 13:41:19.724577] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:17.100 [2024-11-20 13:41:19.724643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:17.100 [2024-11-20 13:41:19.724661] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:17.100 [2024-11-20 13:41:19.724677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.100 
13:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.100 "name": "Existed_Raid", 00:21:17.100 "uuid": "8fc8422b-2009-4a5e-b95f-687870a8ffc0", 00:21:17.100 "strip_size_kb": 64, 00:21:17.100 "state": "configuring", 00:21:17.100 "raid_level": "concat", 00:21:17.100 "superblock": true, 00:21:17.100 "num_base_bdevs": 2, 00:21:17.100 "num_base_bdevs_discovered": 0, 00:21:17.100 "num_base_bdevs_operational": 2, 00:21:17.100 "base_bdevs_list": [ 00:21:17.100 { 00:21:17.100 "name": "BaseBdev1", 00:21:17.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.100 "is_configured": false, 00:21:17.100 "data_offset": 0, 00:21:17.100 "data_size": 0 00:21:17.100 }, 00:21:17.100 { 00:21:17.100 "name": "BaseBdev2", 00:21:17.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.100 "is_configured": false, 00:21:17.100 "data_offset": 0, 00:21:17.100 "data_size": 0 00:21:17.100 } 00:21:17.100 ] 00:21:17.100 }' 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.100 13:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.358 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:17.358 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:17.358 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.358 [2024-11-20 13:41:20.188631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:17.358 [2024-11-20 13:41:20.188675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:17.358 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.358 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:17.358 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.358 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.358 [2024-11-20 13:41:20.196636] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:17.358 [2024-11-20 13:41:20.196692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:17.358 [2024-11-20 13:41:20.196708] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:17.358 [2024-11-20 13:41:20.196727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:17.358 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.358 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:17.358 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.358 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.358 [2024-11-20 13:41:20.241270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:21:17.358 BaseBdev1 00:21:17.358 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.358 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:17.359 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:17.359 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:17.359 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:17.359 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:17.359 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:17.359 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:17.359 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.359 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.359 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.359 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:17.359 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.359 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.359 [ 00:21:17.359 { 00:21:17.359 "name": "BaseBdev1", 00:21:17.359 "aliases": [ 00:21:17.359 "20f202c3-bf5a-4528-9686-cf7aba2d6442" 00:21:17.359 ], 00:21:17.359 "product_name": "Malloc disk", 00:21:17.359 "block_size": 512, 00:21:17.359 "num_blocks": 65536, 00:21:17.359 "uuid": "20f202c3-bf5a-4528-9686-cf7aba2d6442", 00:21:17.359 
"assigned_rate_limits": { 00:21:17.359 "rw_ios_per_sec": 0, 00:21:17.359 "rw_mbytes_per_sec": 0, 00:21:17.359 "r_mbytes_per_sec": 0, 00:21:17.359 "w_mbytes_per_sec": 0 00:21:17.359 }, 00:21:17.359 "claimed": true, 00:21:17.359 "claim_type": "exclusive_write", 00:21:17.359 "zoned": false, 00:21:17.359 "supported_io_types": { 00:21:17.359 "read": true, 00:21:17.359 "write": true, 00:21:17.359 "unmap": true, 00:21:17.359 "flush": true, 00:21:17.359 "reset": true, 00:21:17.359 "nvme_admin": false, 00:21:17.359 "nvme_io": false, 00:21:17.359 "nvme_io_md": false, 00:21:17.359 "write_zeroes": true, 00:21:17.359 "zcopy": true, 00:21:17.359 "get_zone_info": false, 00:21:17.359 "zone_management": false, 00:21:17.359 "zone_append": false, 00:21:17.359 "compare": false, 00:21:17.359 "compare_and_write": false, 00:21:17.359 "abort": true, 00:21:17.359 "seek_hole": false, 00:21:17.359 "seek_data": false, 00:21:17.359 "copy": true, 00:21:17.359 "nvme_iov_md": false 00:21:17.359 }, 00:21:17.359 "memory_domains": [ 00:21:17.359 { 00:21:17.359 "dma_device_id": "system", 00:21:17.359 "dma_device_type": 1 00:21:17.616 }, 00:21:17.616 { 00:21:17.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:17.616 "dma_device_type": 2 00:21:17.616 } 00:21:17.616 ], 00:21:17.616 "driver_specific": {} 00:21:17.616 } 00:21:17.616 ] 00:21:17.616 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.616 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:17.616 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:21:17.616 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:17.616 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.616 13:41:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:17.617 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:17.617 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:17.617 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.617 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.617 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.617 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.617 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.617 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.617 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.617 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.617 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.617 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.617 "name": "Existed_Raid", 00:21:17.617 "uuid": "4d3c42de-8119-4a26-ac9b-776de4271188", 00:21:17.617 "strip_size_kb": 64, 00:21:17.617 "state": "configuring", 00:21:17.617 "raid_level": "concat", 00:21:17.617 "superblock": true, 00:21:17.617 "num_base_bdevs": 2, 00:21:17.617 "num_base_bdevs_discovered": 1, 00:21:17.617 "num_base_bdevs_operational": 2, 00:21:17.617 "base_bdevs_list": [ 00:21:17.617 { 00:21:17.617 "name": "BaseBdev1", 00:21:17.617 "uuid": "20f202c3-bf5a-4528-9686-cf7aba2d6442", 00:21:17.617 "is_configured": true, 00:21:17.617 "data_offset": 
2048, 00:21:17.617 "data_size": 63488 00:21:17.617 }, 00:21:17.617 { 00:21:17.617 "name": "BaseBdev2", 00:21:17.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.617 "is_configured": false, 00:21:17.617 "data_offset": 0, 00:21:17.617 "data_size": 0 00:21:17.617 } 00:21:17.617 ] 00:21:17.617 }' 00:21:17.617 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.617 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.875 [2024-11-20 13:41:20.769453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:17.875 [2024-11-20 13:41:20.769516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.875 [2024-11-20 13:41:20.781494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:17.875 [2024-11-20 13:41:20.784134] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:17.875 [2024-11-20 13:41:20.784302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev2 doesn't exist now 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.875 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.133 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.133 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.133 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.133 13:41:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:18.133 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.133 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.133 "name": "Existed_Raid", 00:21:18.133 "uuid": "da4e5ba3-3cde-489f-a595-679a81e6f181", 00:21:18.133 "strip_size_kb": 64, 00:21:18.133 "state": "configuring", 00:21:18.133 "raid_level": "concat", 00:21:18.133 "superblock": true, 00:21:18.133 "num_base_bdevs": 2, 00:21:18.133 "num_base_bdevs_discovered": 1, 00:21:18.133 "num_base_bdevs_operational": 2, 00:21:18.133 "base_bdevs_list": [ 00:21:18.133 { 00:21:18.133 "name": "BaseBdev1", 00:21:18.133 "uuid": "20f202c3-bf5a-4528-9686-cf7aba2d6442", 00:21:18.133 "is_configured": true, 00:21:18.133 "data_offset": 2048, 00:21:18.133 "data_size": 63488 00:21:18.133 }, 00:21:18.133 { 00:21:18.133 "name": "BaseBdev2", 00:21:18.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.133 "is_configured": false, 00:21:18.133 "data_offset": 0, 00:21:18.133 "data_size": 0 00:21:18.133 } 00:21:18.133 ] 00:21:18.133 }' 00:21:18.133 13:41:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.133 13:41:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.392 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:18.392 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.392 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.651 [2024-11-20 13:41:21.323795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:18.651 [2024-11-20 13:41:21.324731] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:18.651 [2024-11-20 13:41:21.324752] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:18.651 [2024-11-20 13:41:21.325104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:18.651 BaseBdev2 00:21:18.651 [2024-11-20 13:41:21.325318] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:18.651 [2024-11-20 13:41:21.325341] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:18.651 [2024-11-20 13:41:21.325508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.651 [ 00:21:18.651 { 00:21:18.651 "name": "BaseBdev2", 00:21:18.651 "aliases": [ 00:21:18.651 "aac47916-6d84-4cb7-80e0-4ec1116c6500" 00:21:18.651 ], 00:21:18.651 "product_name": "Malloc disk", 00:21:18.651 "block_size": 512, 00:21:18.651 "num_blocks": 65536, 00:21:18.651 "uuid": "aac47916-6d84-4cb7-80e0-4ec1116c6500", 00:21:18.651 "assigned_rate_limits": { 00:21:18.651 "rw_ios_per_sec": 0, 00:21:18.651 "rw_mbytes_per_sec": 0, 00:21:18.651 "r_mbytes_per_sec": 0, 00:21:18.651 "w_mbytes_per_sec": 0 00:21:18.651 }, 00:21:18.651 "claimed": true, 00:21:18.651 "claim_type": "exclusive_write", 00:21:18.651 "zoned": false, 00:21:18.651 "supported_io_types": { 00:21:18.651 "read": true, 00:21:18.651 "write": true, 00:21:18.651 "unmap": true, 00:21:18.651 "flush": true, 00:21:18.651 "reset": true, 00:21:18.651 "nvme_admin": false, 00:21:18.651 "nvme_io": false, 00:21:18.651 "nvme_io_md": false, 00:21:18.651 "write_zeroes": true, 00:21:18.651 "zcopy": true, 00:21:18.651 "get_zone_info": false, 00:21:18.651 "zone_management": false, 00:21:18.651 "zone_append": false, 00:21:18.651 "compare": false, 00:21:18.651 "compare_and_write": false, 00:21:18.651 "abort": true, 00:21:18.651 "seek_hole": false, 00:21:18.651 "seek_data": false, 00:21:18.651 "copy": true, 00:21:18.651 "nvme_iov_md": false 00:21:18.651 }, 00:21:18.651 "memory_domains": [ 00:21:18.651 { 00:21:18.651 "dma_device_id": "system", 00:21:18.651 "dma_device_type": 1 00:21:18.651 }, 00:21:18.651 { 00:21:18.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.651 "dma_device_type": 2 00:21:18.651 } 00:21:18.651 ], 00:21:18.651 "driver_specific": {} 00:21:18.651 } 00:21:18.651 ] 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.651 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:18.651 "name": "Existed_Raid", 00:21:18.651 "uuid": "da4e5ba3-3cde-489f-a595-679a81e6f181", 00:21:18.651 "strip_size_kb": 64, 00:21:18.651 "state": "online", 00:21:18.651 "raid_level": "concat", 00:21:18.651 "superblock": true, 00:21:18.652 "num_base_bdevs": 2, 00:21:18.652 "num_base_bdevs_discovered": 2, 00:21:18.652 "num_base_bdevs_operational": 2, 00:21:18.652 "base_bdevs_list": [ 00:21:18.652 { 00:21:18.652 "name": "BaseBdev1", 00:21:18.652 "uuid": "20f202c3-bf5a-4528-9686-cf7aba2d6442", 00:21:18.652 "is_configured": true, 00:21:18.652 "data_offset": 2048, 00:21:18.652 "data_size": 63488 00:21:18.652 }, 00:21:18.652 { 00:21:18.652 "name": "BaseBdev2", 00:21:18.652 "uuid": "aac47916-6d84-4cb7-80e0-4ec1116c6500", 00:21:18.652 "is_configured": true, 00:21:18.652 "data_offset": 2048, 00:21:18.652 "data_size": 63488 00:21:18.652 } 00:21:18.652 ] 00:21:18.652 }' 00:21:18.652 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:18.652 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.218 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:19.218 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:19.218 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:19.218 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:19.218 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:21:19.218 13:41:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:19.218 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:19.218 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:19.218 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.218 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.218 [2024-11-20 13:41:21.892428] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:19.218 13:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.218 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:19.218 "name": "Existed_Raid", 00:21:19.218 "aliases": [ 00:21:19.218 "da4e5ba3-3cde-489f-a595-679a81e6f181" 00:21:19.218 ], 00:21:19.218 "product_name": "Raid Volume", 00:21:19.218 "block_size": 512, 00:21:19.218 "num_blocks": 126976, 00:21:19.218 "uuid": "da4e5ba3-3cde-489f-a595-679a81e6f181", 00:21:19.218 "assigned_rate_limits": { 00:21:19.218 "rw_ios_per_sec": 0, 00:21:19.218 "rw_mbytes_per_sec": 0, 00:21:19.218 "r_mbytes_per_sec": 0, 00:21:19.218 "w_mbytes_per_sec": 0 00:21:19.218 }, 00:21:19.218 "claimed": false, 00:21:19.218 "zoned": false, 00:21:19.218 "supported_io_types": { 00:21:19.218 "read": true, 00:21:19.218 "write": true, 00:21:19.218 "unmap": true, 00:21:19.218 "flush": true, 00:21:19.218 "reset": true, 00:21:19.218 "nvme_admin": false, 00:21:19.218 "nvme_io": false, 00:21:19.218 "nvme_io_md": false, 00:21:19.218 "write_zeroes": true, 00:21:19.218 "zcopy": false, 00:21:19.218 "get_zone_info": false, 00:21:19.218 "zone_management": false, 00:21:19.218 "zone_append": false, 00:21:19.218 "compare": false, 00:21:19.218 "compare_and_write": false, 00:21:19.218 "abort": false, 00:21:19.218 "seek_hole": false, 
00:21:19.218 "seek_data": false, 00:21:19.218 "copy": false, 00:21:19.218 "nvme_iov_md": false 00:21:19.218 }, 00:21:19.218 "memory_domains": [ 00:21:19.218 { 00:21:19.218 "dma_device_id": "system", 00:21:19.218 "dma_device_type": 1 00:21:19.218 }, 00:21:19.218 { 00:21:19.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.218 "dma_device_type": 2 00:21:19.218 }, 00:21:19.218 { 00:21:19.218 "dma_device_id": "system", 00:21:19.218 "dma_device_type": 1 00:21:19.218 }, 00:21:19.218 { 00:21:19.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.218 "dma_device_type": 2 00:21:19.218 } 00:21:19.218 ], 00:21:19.218 "driver_specific": { 00:21:19.218 "raid": { 00:21:19.218 "uuid": "da4e5ba3-3cde-489f-a595-679a81e6f181", 00:21:19.218 "strip_size_kb": 64, 00:21:19.218 "state": "online", 00:21:19.218 "raid_level": "concat", 00:21:19.218 "superblock": true, 00:21:19.219 "num_base_bdevs": 2, 00:21:19.219 "num_base_bdevs_discovered": 2, 00:21:19.219 "num_base_bdevs_operational": 2, 00:21:19.219 "base_bdevs_list": [ 00:21:19.219 { 00:21:19.219 "name": "BaseBdev1", 00:21:19.219 "uuid": "20f202c3-bf5a-4528-9686-cf7aba2d6442", 00:21:19.219 "is_configured": true, 00:21:19.219 "data_offset": 2048, 00:21:19.219 "data_size": 63488 00:21:19.219 }, 00:21:19.219 { 00:21:19.219 "name": "BaseBdev2", 00:21:19.219 "uuid": "aac47916-6d84-4cb7-80e0-4ec1116c6500", 00:21:19.219 "is_configured": true, 00:21:19.219 "data_offset": 2048, 00:21:19.219 "data_size": 63488 00:21:19.219 } 00:21:19.219 ] 00:21:19.219 } 00:21:19.219 } 00:21:19.219 }' 00:21:19.219 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:19.219 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:19.219 BaseBdev2' 00:21:19.219 13:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:21:19.219 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:19.219 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:19.219 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:19.219 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.219 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:19.219 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.219 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.219 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:19.219 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:19.219 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:19.219 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:19.219 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:19.219 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.219 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.219 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.477 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:19.477 13:41:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:19.477 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:19.477 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.477 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.477 [2024-11-20 13:41:22.148258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:19.477 [2024-11-20 13:41:22.148302] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:19.477 [2024-11-20 13:41:22.148369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:19.477 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.477 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:19.477 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:21:19.477 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:19.477 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:21:19.477 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:21:19.477 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:21:19.477 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:19.477 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:21:19.478 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:19.478 13:41:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:19.478 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:19.478 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.478 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.478 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.478 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.478 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.478 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.478 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.478 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.478 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.478 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.478 "name": "Existed_Raid", 00:21:19.478 "uuid": "da4e5ba3-3cde-489f-a595-679a81e6f181", 00:21:19.478 "strip_size_kb": 64, 00:21:19.478 "state": "offline", 00:21:19.478 "raid_level": "concat", 00:21:19.478 "superblock": true, 00:21:19.478 "num_base_bdevs": 2, 00:21:19.478 "num_base_bdevs_discovered": 1, 00:21:19.478 "num_base_bdevs_operational": 1, 00:21:19.478 "base_bdevs_list": [ 00:21:19.478 { 00:21:19.478 "name": null, 00:21:19.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.478 "is_configured": false, 00:21:19.478 "data_offset": 0, 00:21:19.478 "data_size": 63488 00:21:19.478 }, 00:21:19.478 { 00:21:19.478 "name": 
"BaseBdev2", 00:21:19.478 "uuid": "aac47916-6d84-4cb7-80e0-4ec1116c6500", 00:21:19.478 "is_configured": true, 00:21:19.478 "data_offset": 2048, 00:21:19.478 "data_size": 63488 00:21:19.478 } 00:21:19.478 ] 00:21:19.478 }' 00:21:19.478 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.478 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:20.045 [2024-11-20 13:41:22.812544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:20.045 [2024-11-20 13:41:22.812747] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62080 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62080 ']' 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62080 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:20.045 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.303 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62080 00:21:20.303 killing process with 
pid 62080 00:21:20.303 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:20.303 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:20.303 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62080' 00:21:20.303 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62080 00:21:20.303 [2024-11-20 13:41:22.989952] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:20.303 13:41:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62080 00:21:20.303 [2024-11-20 13:41:23.004880] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:21.239 ************************************ 00:21:21.239 END TEST raid_state_function_test_sb 00:21:21.239 ************************************ 00:21:21.239 13:41:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:21:21.239 00:21:21.239 real 0m5.469s 00:21:21.239 user 0m8.226s 00:21:21.239 sys 0m0.785s 00:21:21.239 13:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.239 13:41:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.239 13:41:24 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:21:21.239 13:41:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:21.239 13:41:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.239 13:41:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:21.239 ************************************ 00:21:21.239 START TEST raid_superblock_test 00:21:21.239 ************************************ 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # 
raid_superblock_test concat 2 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62338 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62338 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # '[' -z 62338 ']' 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.239 13:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.498 [2024-11-20 13:41:24.236306] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:21:21.498 [2024-11-20 13:41:24.237339] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62338 ] 00:21:21.757 [2024-11-20 13:41:24.424065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.757 [2024-11-20 13:41:24.554293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.016 [2024-11-20 13:41:24.757776] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:22.016 [2024-11-20 13:41:24.758037] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:21:22.584 
13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.584 malloc1 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.584 [2024-11-20 13:41:25.294966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:22.584 [2024-11-20 13:41:25.295036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.584 [2024-11-20 13:41:25.295069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:22.584 [2024-11-20 13:41:25.295086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.584 [2024-11-20 13:41:25.297976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.584 [2024-11-20 13:41:25.298034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:22.584 pt1 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.584 malloc2 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.584 [2024-11-20 13:41:25.350875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:22.584 [2024-11-20 13:41:25.351126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.584 [2024-11-20 13:41:25.351210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:22.584 [2024-11-20 13:41:25.351423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.584 [2024-11-20 13:41:25.354190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.584 [2024-11-20 13:41:25.354346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:22.584 
pt2 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.584 [2024-11-20 13:41:25.363104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:22.584 [2024-11-20 13:41:25.365535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:22.584 [2024-11-20 13:41:25.365741] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:22.584 [2024-11-20 13:41:25.365761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:22.584 [2024-11-20 13:41:25.366077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:22.584 [2024-11-20 13:41:25.366278] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:22.584 [2024-11-20 13:41:25.366306] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:22.584 [2024-11-20 13:41:25.366489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.584 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:22.584 "name": "raid_bdev1", 00:21:22.584 "uuid": "f1bf6aaf-b6fd-4dcd-8ee1-0a3c0aae7e95", 00:21:22.584 "strip_size_kb": 64, 00:21:22.584 "state": "online", 00:21:22.584 "raid_level": "concat", 00:21:22.584 "superblock": true, 00:21:22.584 "num_base_bdevs": 2, 00:21:22.584 "num_base_bdevs_discovered": 2, 00:21:22.584 "num_base_bdevs_operational": 2, 00:21:22.584 "base_bdevs_list": [ 00:21:22.585 { 00:21:22.585 "name": "pt1", 
00:21:22.585 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:22.585 "is_configured": true, 00:21:22.585 "data_offset": 2048, 00:21:22.585 "data_size": 63488 00:21:22.585 }, 00:21:22.585 { 00:21:22.585 "name": "pt2", 00:21:22.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:22.585 "is_configured": true, 00:21:22.585 "data_offset": 2048, 00:21:22.585 "data_size": 63488 00:21:22.585 } 00:21:22.585 ] 00:21:22.585 }' 00:21:22.585 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:22.585 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.154 [2024-11-20 13:41:25.839540] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:23.154 "name": "raid_bdev1", 00:21:23.154 "aliases": [ 00:21:23.154 "f1bf6aaf-b6fd-4dcd-8ee1-0a3c0aae7e95" 00:21:23.154 ], 00:21:23.154 "product_name": "Raid Volume", 00:21:23.154 "block_size": 512, 00:21:23.154 "num_blocks": 126976, 00:21:23.154 "uuid": "f1bf6aaf-b6fd-4dcd-8ee1-0a3c0aae7e95", 00:21:23.154 "assigned_rate_limits": { 00:21:23.154 "rw_ios_per_sec": 0, 00:21:23.154 "rw_mbytes_per_sec": 0, 00:21:23.154 "r_mbytes_per_sec": 0, 00:21:23.154 "w_mbytes_per_sec": 0 00:21:23.154 }, 00:21:23.154 "claimed": false, 00:21:23.154 "zoned": false, 00:21:23.154 "supported_io_types": { 00:21:23.154 "read": true, 00:21:23.154 "write": true, 00:21:23.154 "unmap": true, 00:21:23.154 "flush": true, 00:21:23.154 "reset": true, 00:21:23.154 "nvme_admin": false, 00:21:23.154 "nvme_io": false, 00:21:23.154 "nvme_io_md": false, 00:21:23.154 "write_zeroes": true, 00:21:23.154 "zcopy": false, 00:21:23.154 "get_zone_info": false, 00:21:23.154 "zone_management": false, 00:21:23.154 "zone_append": false, 00:21:23.154 "compare": false, 00:21:23.154 "compare_and_write": false, 00:21:23.154 "abort": false, 00:21:23.154 "seek_hole": false, 00:21:23.154 "seek_data": false, 00:21:23.154 "copy": false, 00:21:23.154 "nvme_iov_md": false 00:21:23.154 }, 00:21:23.154 "memory_domains": [ 00:21:23.154 { 00:21:23.154 "dma_device_id": "system", 00:21:23.154 "dma_device_type": 1 00:21:23.154 }, 00:21:23.154 { 00:21:23.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.154 "dma_device_type": 2 00:21:23.154 }, 00:21:23.154 { 00:21:23.154 "dma_device_id": "system", 00:21:23.154 "dma_device_type": 1 00:21:23.154 }, 00:21:23.154 { 00:21:23.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.154 "dma_device_type": 2 00:21:23.154 } 00:21:23.154 ], 00:21:23.154 "driver_specific": { 00:21:23.154 "raid": { 00:21:23.154 "uuid": "f1bf6aaf-b6fd-4dcd-8ee1-0a3c0aae7e95", 00:21:23.154 "strip_size_kb": 64, 00:21:23.154 "state": "online", 00:21:23.154 
"raid_level": "concat", 00:21:23.154 "superblock": true, 00:21:23.154 "num_base_bdevs": 2, 00:21:23.154 "num_base_bdevs_discovered": 2, 00:21:23.154 "num_base_bdevs_operational": 2, 00:21:23.154 "base_bdevs_list": [ 00:21:23.154 { 00:21:23.154 "name": "pt1", 00:21:23.154 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:23.154 "is_configured": true, 00:21:23.154 "data_offset": 2048, 00:21:23.154 "data_size": 63488 00:21:23.154 }, 00:21:23.154 { 00:21:23.154 "name": "pt2", 00:21:23.154 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:23.154 "is_configured": true, 00:21:23.154 "data_offset": 2048, 00:21:23.154 "data_size": 63488 00:21:23.154 } 00:21:23.154 ] 00:21:23.154 } 00:21:23.154 } 00:21:23.154 }' 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:23.154 pt2' 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.154 13:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.154 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.154 13:41:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:23.154 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:23.154 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:23.154 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:23.154 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:23.154 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.154 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.154 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:23.413 [2024-11-20 13:41:26.099842] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f1bf6aaf-b6fd-4dcd-8ee1-0a3c0aae7e95 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
f1bf6aaf-b6fd-4dcd-8ee1-0a3c0aae7e95 ']' 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.413 [2024-11-20 13:41:26.151325] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:23.413 [2024-11-20 13:41:26.151368] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:23.413 [2024-11-20 13:41:26.151518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:23.413 [2024-11-20 13:41:26.151616] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:23.413 [2024-11-20 13:41:26.151646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:23.413 13:41:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.413 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.413 [2024-11-20 13:41:26.287342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:23.413 [2024-11-20 13:41:26.289948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:23.413 [2024-11-20 13:41:26.290036] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:23.413 [2024-11-20 13:41:26.290111] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:23.413 [2024-11-20 13:41:26.290149] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:23.413 [2024-11-20 13:41:26.290166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:23.414 request: 00:21:23.414 { 00:21:23.414 "name": "raid_bdev1", 00:21:23.414 "raid_level": "concat", 00:21:23.414 "base_bdevs": [ 00:21:23.414 "malloc1", 00:21:23.414 "malloc2" 00:21:23.414 ], 00:21:23.414 "strip_size_kb": 64, 
00:21:23.414 "superblock": false, 00:21:23.414 "method": "bdev_raid_create", 00:21:23.414 "req_id": 1 00:21:23.414 } 00:21:23.414 Got JSON-RPC error response 00:21:23.414 response: 00:21:23.414 { 00:21:23.414 "code": -17, 00:21:23.414 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:23.414 } 00:21:23.414 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:23.414 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:21:23.414 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:23.414 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:23.414 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:23.414 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.414 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.414 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.414 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:23.414 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.672 [2024-11-20 13:41:26.371348] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:21:23.672 [2024-11-20 13:41:26.371567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.672 [2024-11-20 13:41:26.371740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:23.672 [2024-11-20 13:41:26.371877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.672 [2024-11-20 13:41:26.374819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.672 [2024-11-20 13:41:26.375104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:23.672 [2024-11-20 13:41:26.375326] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:23.672 [2024-11-20 13:41:26.375515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:23.672 pt1 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.672 "name": "raid_bdev1", 00:21:23.672 "uuid": "f1bf6aaf-b6fd-4dcd-8ee1-0a3c0aae7e95", 00:21:23.672 "strip_size_kb": 64, 00:21:23.672 "state": "configuring", 00:21:23.672 "raid_level": "concat", 00:21:23.672 "superblock": true, 00:21:23.672 "num_base_bdevs": 2, 00:21:23.672 "num_base_bdevs_discovered": 1, 00:21:23.672 "num_base_bdevs_operational": 2, 00:21:23.672 "base_bdevs_list": [ 00:21:23.672 { 00:21:23.672 "name": "pt1", 00:21:23.672 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:23.672 "is_configured": true, 00:21:23.672 "data_offset": 2048, 00:21:23.672 "data_size": 63488 00:21:23.672 }, 00:21:23.672 { 00:21:23.672 "name": null, 00:21:23.672 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:23.672 "is_configured": false, 00:21:23.672 "data_offset": 2048, 00:21:23.672 "data_size": 63488 00:21:23.672 } 00:21:23.672 ] 00:21:23.672 }' 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.672 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.239 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:24.239 13:41:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:24.239 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:24.239 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:24.239 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.239 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.239 [2024-11-20 13:41:26.883610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:24.239 [2024-11-20 13:41:26.883701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.239 [2024-11-20 13:41:26.883734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:24.239 [2024-11-20 13:41:26.883753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.239 [2024-11-20 13:41:26.884346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.239 [2024-11-20 13:41:26.884384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:24.239 [2024-11-20 13:41:26.884486] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:24.239 [2024-11-20 13:41:26.884535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:24.239 [2024-11-20 13:41:26.884688] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:24.239 [2024-11-20 13:41:26.884715] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:24.239 [2024-11-20 13:41:26.885030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:24.239 [2024-11-20 13:41:26.885225] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:21:24.239 [2024-11-20 13:41:26.885240] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:24.239 [2024-11-20 13:41:26.885410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.239 pt2 00:21:24.239 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.239 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:24.240 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:24.240 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:21:24.240 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:24.240 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:24.240 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:24.240 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:24.240 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:24.240 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.240 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.240 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.240 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.240 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.240 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.240 13:41:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:24.240 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.240 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.240 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.240 "name": "raid_bdev1", 00:21:24.240 "uuid": "f1bf6aaf-b6fd-4dcd-8ee1-0a3c0aae7e95", 00:21:24.240 "strip_size_kb": 64, 00:21:24.240 "state": "online", 00:21:24.240 "raid_level": "concat", 00:21:24.240 "superblock": true, 00:21:24.240 "num_base_bdevs": 2, 00:21:24.240 "num_base_bdevs_discovered": 2, 00:21:24.240 "num_base_bdevs_operational": 2, 00:21:24.240 "base_bdevs_list": [ 00:21:24.240 { 00:21:24.240 "name": "pt1", 00:21:24.240 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:24.240 "is_configured": true, 00:21:24.240 "data_offset": 2048, 00:21:24.240 "data_size": 63488 00:21:24.240 }, 00:21:24.240 { 00:21:24.240 "name": "pt2", 00:21:24.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:24.240 "is_configured": true, 00:21:24.240 "data_offset": 2048, 00:21:24.240 "data_size": 63488 00:21:24.240 } 00:21:24.240 ] 00:21:24.240 }' 00:21:24.240 13:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.240 13:41:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.806 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:24.806 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:24.806 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:24.806 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:24.806 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:24.806 13:41:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:24.806 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:24.806 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.806 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.806 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:24.806 [2024-11-20 13:41:27.440098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:24.806 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.806 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:24.806 "name": "raid_bdev1", 00:21:24.806 "aliases": [ 00:21:24.806 "f1bf6aaf-b6fd-4dcd-8ee1-0a3c0aae7e95" 00:21:24.806 ], 00:21:24.806 "product_name": "Raid Volume", 00:21:24.806 "block_size": 512, 00:21:24.806 "num_blocks": 126976, 00:21:24.806 "uuid": "f1bf6aaf-b6fd-4dcd-8ee1-0a3c0aae7e95", 00:21:24.806 "assigned_rate_limits": { 00:21:24.806 "rw_ios_per_sec": 0, 00:21:24.806 "rw_mbytes_per_sec": 0, 00:21:24.806 "r_mbytes_per_sec": 0, 00:21:24.806 "w_mbytes_per_sec": 0 00:21:24.806 }, 00:21:24.806 "claimed": false, 00:21:24.806 "zoned": false, 00:21:24.806 "supported_io_types": { 00:21:24.806 "read": true, 00:21:24.806 "write": true, 00:21:24.806 "unmap": true, 00:21:24.806 "flush": true, 00:21:24.806 "reset": true, 00:21:24.806 "nvme_admin": false, 00:21:24.806 "nvme_io": false, 00:21:24.806 "nvme_io_md": false, 00:21:24.806 "write_zeroes": true, 00:21:24.806 "zcopy": false, 00:21:24.806 "get_zone_info": false, 00:21:24.806 "zone_management": false, 00:21:24.806 "zone_append": false, 00:21:24.806 "compare": false, 00:21:24.806 "compare_and_write": false, 00:21:24.806 "abort": false, 00:21:24.806 "seek_hole": false, 00:21:24.806 
"seek_data": false, 00:21:24.806 "copy": false, 00:21:24.806 "nvme_iov_md": false 00:21:24.806 }, 00:21:24.806 "memory_domains": [ 00:21:24.806 { 00:21:24.806 "dma_device_id": "system", 00:21:24.806 "dma_device_type": 1 00:21:24.806 }, 00:21:24.806 { 00:21:24.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.806 "dma_device_type": 2 00:21:24.806 }, 00:21:24.806 { 00:21:24.806 "dma_device_id": "system", 00:21:24.806 "dma_device_type": 1 00:21:24.806 }, 00:21:24.806 { 00:21:24.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.806 "dma_device_type": 2 00:21:24.806 } 00:21:24.806 ], 00:21:24.806 "driver_specific": { 00:21:24.806 "raid": { 00:21:24.806 "uuid": "f1bf6aaf-b6fd-4dcd-8ee1-0a3c0aae7e95", 00:21:24.806 "strip_size_kb": 64, 00:21:24.806 "state": "online", 00:21:24.806 "raid_level": "concat", 00:21:24.806 "superblock": true, 00:21:24.806 "num_base_bdevs": 2, 00:21:24.806 "num_base_bdevs_discovered": 2, 00:21:24.806 "num_base_bdevs_operational": 2, 00:21:24.806 "base_bdevs_list": [ 00:21:24.806 { 00:21:24.806 "name": "pt1", 00:21:24.806 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:24.806 "is_configured": true, 00:21:24.806 "data_offset": 2048, 00:21:24.806 "data_size": 63488 00:21:24.806 }, 00:21:24.806 { 00:21:24.806 "name": "pt2", 00:21:24.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:24.806 "is_configured": true, 00:21:24.806 "data_offset": 2048, 00:21:24.806 "data_size": 63488 00:21:24.806 } 00:21:24.806 ] 00:21:24.806 } 00:21:24.807 } 00:21:24.807 }' 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:24.807 pt2' 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.807 13:41:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:24.807 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:24.807 [2024-11-20 13:41:27.716136] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:25.065 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.065 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f1bf6aaf-b6fd-4dcd-8ee1-0a3c0aae7e95 '!=' f1bf6aaf-b6fd-4dcd-8ee1-0a3c0aae7e95 ']' 00:21:25.065 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:21:25.065 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:25.065 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:25.065 13:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62338 00:21:25.065 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62338 ']' 00:21:25.065 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62338 00:21:25.065 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:21:25.065 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.065 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62338 00:21:25.065 killing process with pid 62338 00:21:25.065 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:25.065 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:25.065 13:41:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62338' 00:21:25.065 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62338 00:21:25.065 [2024-11-20 13:41:27.797923] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:25.065 13:41:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62338 00:21:25.065 [2024-11-20 13:41:27.798035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:25.065 [2024-11-20 13:41:27.798100] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:25.065 [2024-11-20 13:41:27.798124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:25.323 [2024-11-20 13:41:27.990609] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:26.259 ************************************ 00:21:26.259 END TEST raid_superblock_test 00:21:26.259 ************************************ 00:21:26.259 13:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:26.259 00:21:26.259 real 0m4.939s 00:21:26.259 user 0m7.269s 00:21:26.259 sys 0m0.700s 00:21:26.259 13:41:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:26.259 13:41:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.259 13:41:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:21:26.259 13:41:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:26.259 13:41:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:26.259 13:41:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:26.259 ************************************ 00:21:26.259 START TEST raid_read_error_test 00:21:26.259 ************************************ 00:21:26.259 13:41:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:26.259 13:41:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hlQz18JjAi 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62555 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62555 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62555 ']' 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.259 13:41:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.519 [2024-11-20 13:41:29.246626] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:21:26.519 [2024-11-20 13:41:29.247134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62555 ] 00:21:26.787 [2024-11-20 13:41:29.436190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.787 [2024-11-20 13:41:29.572282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.065 [2024-11-20 13:41:29.783976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.065 [2024-11-20 13:41:29.784035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.323 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.323 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:21:27.323 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:27.323 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:27.323 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.323 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.582 BaseBdev1_malloc 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.582 true 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.582 [2024-11-20 13:41:30.268621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:27.582 [2024-11-20 13:41:30.268703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:27.582 [2024-11-20 13:41:30.268747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:27.582 [2024-11-20 13:41:30.268781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:27.582 [2024-11-20 13:41:30.271883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:27.582 [2024-11-20 13:41:30.271963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:27.582 BaseBdev1 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.582 BaseBdev2_malloc 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.582 true 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.582 [2024-11-20 13:41:30.326915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:27.582 [2024-11-20 13:41:30.327026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:27.582 [2024-11-20 13:41:30.327053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:27.582 [2024-11-20 13:41:30.327071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:27.582 [2024-11-20 13:41:30.329903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:27.582 [2024-11-20 13:41:30.329961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:27.582 BaseBdev2 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.582 [2024-11-20 13:41:30.335035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:21:27.582 [2024-11-20 13:41:30.337573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:27.582 [2024-11-20 13:41:30.338029] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:27.582 [2024-11-20 13:41:30.338060] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:27.582 [2024-11-20 13:41:30.338396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:27.582 [2024-11-20 13:41:30.338596] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:27.582 [2024-11-20 13:41:30.338616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:27.582 [2024-11-20 13:41:30.338813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.582 "name": "raid_bdev1", 00:21:27.582 "uuid": "3f53d563-db96-42c2-8af6-986b6077fb46", 00:21:27.582 "strip_size_kb": 64, 00:21:27.582 "state": "online", 00:21:27.582 "raid_level": "concat", 00:21:27.582 "superblock": true, 00:21:27.582 "num_base_bdevs": 2, 00:21:27.582 "num_base_bdevs_discovered": 2, 00:21:27.582 "num_base_bdevs_operational": 2, 00:21:27.582 "base_bdevs_list": [ 00:21:27.582 { 00:21:27.582 "name": "BaseBdev1", 00:21:27.582 "uuid": "50b65f04-d5d4-5ac2-8bef-7d16a1c8f560", 00:21:27.582 "is_configured": true, 00:21:27.582 "data_offset": 2048, 00:21:27.582 "data_size": 63488 00:21:27.582 }, 00:21:27.582 { 00:21:27.582 "name": "BaseBdev2", 00:21:27.582 "uuid": "30c4c999-5da6-5e40-ac88-bf1470bd5fb8", 00:21:27.582 "is_configured": true, 00:21:27.582 "data_offset": 2048, 00:21:27.582 "data_size": 63488 00:21:27.582 } 00:21:27.582 ] 00:21:27.582 }' 00:21:27.582 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.583 13:41:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.150 13:41:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:28.150 13:41:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:28.150 [2024-11-20 13:41:30.956649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.085 "name": "raid_bdev1", 00:21:29.085 "uuid": "3f53d563-db96-42c2-8af6-986b6077fb46", 00:21:29.085 "strip_size_kb": 64, 00:21:29.085 "state": "online", 00:21:29.085 "raid_level": "concat", 00:21:29.085 "superblock": true, 00:21:29.085 "num_base_bdevs": 2, 00:21:29.085 "num_base_bdevs_discovered": 2, 00:21:29.085 "num_base_bdevs_operational": 2, 00:21:29.085 "base_bdevs_list": [ 00:21:29.085 { 00:21:29.085 "name": "BaseBdev1", 00:21:29.085 "uuid": "50b65f04-d5d4-5ac2-8bef-7d16a1c8f560", 00:21:29.085 "is_configured": true, 00:21:29.085 "data_offset": 2048, 00:21:29.085 "data_size": 63488 00:21:29.085 }, 00:21:29.085 { 00:21:29.085 "name": "BaseBdev2", 00:21:29.085 "uuid": "30c4c999-5da6-5e40-ac88-bf1470bd5fb8", 00:21:29.085 "is_configured": true, 00:21:29.085 "data_offset": 2048, 00:21:29.085 "data_size": 63488 00:21:29.085 } 00:21:29.085 ] 00:21:29.085 }' 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.085 13:41:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.653 13:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:29.653 13:41:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.653 13:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.653 [2024-11-20 13:41:32.395432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:29.653 [2024-11-20 13:41:32.395652] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:29.653 { 00:21:29.653 "results": [ 00:21:29.653 { 00:21:29.653 "job": "raid_bdev1", 00:21:29.653 "core_mask": "0x1", 00:21:29.653 "workload": "randrw", 00:21:29.653 "percentage": 50, 00:21:29.653 "status": "finished", 00:21:29.653 "queue_depth": 1, 00:21:29.653 "io_size": 131072, 00:21:29.653 "runtime": 1.436571, 00:21:29.653 "iops": 9993.240849216641, 00:21:29.653 "mibps": 1249.1551061520802, 00:21:29.653 "io_failed": 1, 00:21:29.653 "io_timeout": 0, 00:21:29.653 "avg_latency_us": 139.1789330513465, 00:21:29.653 "min_latency_us": 39.09818181818182, 00:21:29.653 "max_latency_us": 2457.6 00:21:29.653 } 00:21:29.653 ], 00:21:29.653 "core_count": 1 00:21:29.653 } 00:21:29.653 [2024-11-20 13:41:32.399352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:29.653 [2024-11-20 13:41:32.399479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.653 [2024-11-20 13:41:32.399526] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:29.653 [2024-11-20 13:41:32.399548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:29.653 13:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.653 13:41:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62555 00:21:29.653 13:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62555 ']' 00:21:29.653 13:41:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62555 00:21:29.653 13:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:21:29.653 13:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.653 13:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62555 00:21:29.653 killing process with pid 62555 00:21:29.653 13:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:29.653 13:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:29.653 13:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62555' 00:21:29.653 13:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62555 00:21:29.653 [2024-11-20 13:41:32.440725] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:29.653 13:41:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62555 00:21:29.912 [2024-11-20 13:41:32.567213] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:30.847 13:41:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hlQz18JjAi 00:21:30.847 13:41:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:30.847 13:41:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:30.847 13:41:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:21:30.847 13:41:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:21:30.847 13:41:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:30.847 13:41:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:30.847 13:41:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:21:30.847 00:21:30.848 real 0m4.608s 00:21:30.848 user 0m5.707s 00:21:30.848 sys 0m0.600s 00:21:30.848 13:41:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:30.848 13:41:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.848 ************************************ 00:21:30.848 END TEST raid_read_error_test 00:21:30.848 ************************************ 00:21:31.121 13:41:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:21:31.121 13:41:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:31.121 13:41:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.121 13:41:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:31.121 ************************************ 00:21:31.121 START TEST raid_write_error_test 00:21:31.121 ************************************ 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:31.121 13:41:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MwL4mDQg0A 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62695 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62695 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62695 ']' 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:31.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.121 13:41:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.121 [2024-11-20 13:41:33.888367] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:21:31.121 [2024-11-20 13:41:33.888556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62695 ] 00:21:31.423 [2024-11-20 13:41:34.066045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.423 [2024-11-20 13:41:34.202494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.680 [2024-11-20 13:41:34.415011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:31.680 [2024-11-20 13:41:34.415311] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
for bdev in "${base_bdevs[@]}" 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.247 BaseBdev1_malloc 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.247 true 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.247 [2024-11-20 13:41:34.922336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:32.247 [2024-11-20 13:41:34.922434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.247 [2024-11-20 13:41:34.922464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:32.247 [2024-11-20 13:41:34.922481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.247 [2024-11-20 13:41:34.925440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.247 [2024-11-20 13:41:34.925522] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:32.247 BaseBdev1 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.247 BaseBdev2_malloc 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.247 true 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.247 [2024-11-20 13:41:34.983799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:32.247 [2024-11-20 13:41:34.983899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.247 [2024-11-20 13:41:34.983959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:32.247 
[2024-11-20 13:41:34.983987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.247 [2024-11-20 13:41:34.986815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.247 [2024-11-20 13:41:34.986877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:32.247 BaseBdev2 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.247 13:41:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.247 [2024-11-20 13:41:34.995896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:32.247 [2024-11-20 13:41:34.998603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:32.247 [2024-11-20 13:41:34.999072] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:32.247 [2024-11-20 13:41:34.999230] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:32.247 [2024-11-20 13:41:34.999608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:32.247 [2024-11-20 13:41:34.999999] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:32.247 [2024-11-20 13:41:35.000145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:32.247 [2024-11-20 13:41:35.000563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:32.247 13:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.247 
13:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:21:32.247 13:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:32.247 13:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:32.247 13:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:32.247 13:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:32.247 13:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:32.247 13:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.247 13:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.247 13:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.247 13:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.247 13:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.247 13:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.247 13:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.247 13:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.247 13:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.247 13:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.247 "name": "raid_bdev1", 00:21:32.247 "uuid": "28d9fc60-7568-48fd-bdb6-4f0980417650", 00:21:32.247 "strip_size_kb": 64, 00:21:32.247 "state": "online", 00:21:32.247 "raid_level": "concat", 00:21:32.247 "superblock": true, 
00:21:32.247 "num_base_bdevs": 2, 00:21:32.247 "num_base_bdevs_discovered": 2, 00:21:32.247 "num_base_bdevs_operational": 2, 00:21:32.247 "base_bdevs_list": [ 00:21:32.247 { 00:21:32.247 "name": "BaseBdev1", 00:21:32.247 "uuid": "f34c9731-b067-59ea-a531-f5c3c28334d5", 00:21:32.247 "is_configured": true, 00:21:32.247 "data_offset": 2048, 00:21:32.247 "data_size": 63488 00:21:32.247 }, 00:21:32.247 { 00:21:32.247 "name": "BaseBdev2", 00:21:32.247 "uuid": "df44d51b-5713-543e-bf07-171d0fd88c50", 00:21:32.247 "is_configured": true, 00:21:32.247 "data_offset": 2048, 00:21:32.247 "data_size": 63488 00:21:32.247 } 00:21:32.247 ] 00:21:32.247 }' 00:21:32.247 13:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.247 13:41:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.815 13:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:32.815 13:41:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:32.815 [2024-11-20 13:41:35.646201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.749 "name": "raid_bdev1", 00:21:33.749 "uuid": "28d9fc60-7568-48fd-bdb6-4f0980417650", 00:21:33.749 "strip_size_kb": 64, 00:21:33.749 "state": "online", 00:21:33.749 "raid_level": "concat", 
00:21:33.749 "superblock": true, 00:21:33.749 "num_base_bdevs": 2, 00:21:33.749 "num_base_bdevs_discovered": 2, 00:21:33.749 "num_base_bdevs_operational": 2, 00:21:33.749 "base_bdevs_list": [ 00:21:33.749 { 00:21:33.749 "name": "BaseBdev1", 00:21:33.749 "uuid": "f34c9731-b067-59ea-a531-f5c3c28334d5", 00:21:33.749 "is_configured": true, 00:21:33.749 "data_offset": 2048, 00:21:33.749 "data_size": 63488 00:21:33.749 }, 00:21:33.749 { 00:21:33.749 "name": "BaseBdev2", 00:21:33.749 "uuid": "df44d51b-5713-543e-bf07-171d0fd88c50", 00:21:33.749 "is_configured": true, 00:21:33.749 "data_offset": 2048, 00:21:33.749 "data_size": 63488 00:21:33.749 } 00:21:33.749 ] 00:21:33.749 }' 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.749 13:41:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.315 13:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:34.315 13:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.315 13:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.315 [2024-11-20 13:41:37.068926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:34.315 [2024-11-20 13:41:37.068995] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:34.315 [2024-11-20 13:41:37.073527] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:34.315 [2024-11-20 13:41:37.073736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.315 [2024-11-20 13:41:37.073857] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:34.315 [2024-11-20 13:41:37.073932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:34.315 { 
00:21:34.315 "results": [ 00:21:34.315 { 00:21:34.315 "job": "raid_bdev1", 00:21:34.315 "core_mask": "0x1", 00:21:34.315 "workload": "randrw", 00:21:34.315 "percentage": 50, 00:21:34.315 "status": "finished", 00:21:34.315 "queue_depth": 1, 00:21:34.315 "io_size": 131072, 00:21:34.315 "runtime": 1.420625, 00:21:34.315 "iops": 9975.890893092828, 00:21:34.315 "mibps": 1246.9863616366035, 00:21:34.315 "io_failed": 1, 00:21:34.315 "io_timeout": 0, 00:21:34.315 "avg_latency_us": 139.51055002148772, 00:21:34.315 "min_latency_us": 39.33090909090909, 00:21:34.315 "max_latency_us": 1809.6872727272728 00:21:34.315 } 00:21:34.315 ], 00:21:34.315 "core_count": 1 00:21:34.315 } 00:21:34.315 13:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.315 13:41:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62695 00:21:34.315 13:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62695 ']' 00:21:34.315 13:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62695 00:21:34.315 13:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:21:34.315 13:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.315 13:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62695 00:21:34.315 killing process with pid 62695 00:21:34.315 13:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.315 13:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.315 13:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62695' 00:21:34.315 13:41:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62695 00:21:34.315 13:41:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@978 -- # wait 62695 00:21:34.315 [2024-11-20 13:41:37.113575] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:34.573 [2024-11-20 13:41:37.266480] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:35.964 13:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MwL4mDQg0A 00:21:35.964 13:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:35.964 13:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:35.964 ************************************ 00:21:35.964 END TEST raid_write_error_test 00:21:35.964 ************************************ 00:21:35.964 13:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:21:35.964 13:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:21:35.964 13:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:35.964 13:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:21:35.964 13:41:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:21:35.964 00:21:35.964 real 0m4.755s 00:21:35.964 user 0m5.894s 00:21:35.964 sys 0m0.584s 00:21:35.964 13:41:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:35.964 13:41:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.964 13:41:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:21:35.964 13:41:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:21:35.964 13:41:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:35.964 13:41:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:35.964 13:41:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:21:35.964 ************************************ 00:21:35.964 START TEST raid_state_function_test 00:21:35.964 ************************************ 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local 
strip_size 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62844 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62844' 00:21:35.964 Process raid pid: 62844 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62844 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62844 ']' 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.964 13:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:35.964 [2024-11-20 13:41:38.703983] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:21:35.964 [2024-11-20 13:41:38.704209] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.222 [2024-11-20 13:41:38.890045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.222 [2024-11-20 13:41:39.031846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.482 [2024-11-20 13:41:39.253909] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:36.482 [2024-11-20 13:41:39.253977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:37.049 13:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.049 13:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:21:37.049 13:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:37.049 13:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.049 13:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.049 [2024-11-20 13:41:39.764870] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:37.049 [2024-11-20 13:41:39.764948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:37.049 [2024-11-20 13:41:39.764967] bdev.c:8685:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:21:37.049 [2024-11-20 13:41:39.764984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:37.049 13:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.049 13:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:37.049 13:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:37.049 13:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:37.049 13:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:37.049 13:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:37.049 13:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:37.049 13:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.049 13:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.050 13:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.050 13:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.050 13:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.050 13:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.050 13:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.050 13:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.050 13:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:21:37.050 13:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.050 "name": "Existed_Raid", 00:21:37.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.050 "strip_size_kb": 0, 00:21:37.050 "state": "configuring", 00:21:37.050 "raid_level": "raid1", 00:21:37.050 "superblock": false, 00:21:37.050 "num_base_bdevs": 2, 00:21:37.050 "num_base_bdevs_discovered": 0, 00:21:37.050 "num_base_bdevs_operational": 2, 00:21:37.050 "base_bdevs_list": [ 00:21:37.050 { 00:21:37.050 "name": "BaseBdev1", 00:21:37.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.050 "is_configured": false, 00:21:37.050 "data_offset": 0, 00:21:37.050 "data_size": 0 00:21:37.050 }, 00:21:37.050 { 00:21:37.050 "name": "BaseBdev2", 00:21:37.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.050 "is_configured": false, 00:21:37.050 "data_offset": 0, 00:21:37.050 "data_size": 0 00:21:37.050 } 00:21:37.050 ] 00:21:37.050 }' 00:21:37.050 13:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.050 13:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.618 [2024-11-20 13:41:40.277015] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:37.618 [2024-11-20 13:41:40.277063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.618 [2024-11-20 13:41:40.284989] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:37.618 [2024-11-20 13:41:40.285050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:37.618 [2024-11-20 13:41:40.285068] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:37.618 [2024-11-20 13:41:40.285087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.618 [2024-11-20 13:41:40.337755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:37.618 BaseBdev1 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:37.618 
13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.618 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.618 [ 00:21:37.618 { 00:21:37.618 "name": "BaseBdev1", 00:21:37.618 "aliases": [ 00:21:37.618 "d60de503-df47-419f-acd5-dd00c76263c2" 00:21:37.618 ], 00:21:37.618 "product_name": "Malloc disk", 00:21:37.618 "block_size": 512, 00:21:37.618 "num_blocks": 65536, 00:21:37.618 "uuid": "d60de503-df47-419f-acd5-dd00c76263c2", 00:21:37.618 "assigned_rate_limits": { 00:21:37.618 "rw_ios_per_sec": 0, 00:21:37.618 "rw_mbytes_per_sec": 0, 00:21:37.619 "r_mbytes_per_sec": 0, 00:21:37.619 "w_mbytes_per_sec": 0 00:21:37.619 }, 00:21:37.619 "claimed": true, 00:21:37.619 "claim_type": "exclusive_write", 00:21:37.619 "zoned": false, 00:21:37.619 "supported_io_types": { 00:21:37.619 "read": true, 00:21:37.619 "write": true, 00:21:37.619 "unmap": true, 00:21:37.619 "flush": true, 00:21:37.619 "reset": true, 00:21:37.619 "nvme_admin": false, 00:21:37.619 "nvme_io": false, 00:21:37.619 "nvme_io_md": false, 00:21:37.619 "write_zeroes": true, 00:21:37.619 "zcopy": true, 00:21:37.619 "get_zone_info": 
false, 00:21:37.619 "zone_management": false, 00:21:37.619 "zone_append": false, 00:21:37.619 "compare": false, 00:21:37.619 "compare_and_write": false, 00:21:37.619 "abort": true, 00:21:37.619 "seek_hole": false, 00:21:37.619 "seek_data": false, 00:21:37.619 "copy": true, 00:21:37.619 "nvme_iov_md": false 00:21:37.619 }, 00:21:37.619 "memory_domains": [ 00:21:37.619 { 00:21:37.619 "dma_device_id": "system", 00:21:37.619 "dma_device_type": 1 00:21:37.619 }, 00:21:37.619 { 00:21:37.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.619 "dma_device_type": 2 00:21:37.619 } 00:21:37.619 ], 00:21:37.619 "driver_specific": {} 00:21:37.619 } 00:21:37.619 ] 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.619 "name": "Existed_Raid", 00:21:37.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.619 "strip_size_kb": 0, 00:21:37.619 "state": "configuring", 00:21:37.619 "raid_level": "raid1", 00:21:37.619 "superblock": false, 00:21:37.619 "num_base_bdevs": 2, 00:21:37.619 "num_base_bdevs_discovered": 1, 00:21:37.619 "num_base_bdevs_operational": 2, 00:21:37.619 "base_bdevs_list": [ 00:21:37.619 { 00:21:37.619 "name": "BaseBdev1", 00:21:37.619 "uuid": "d60de503-df47-419f-acd5-dd00c76263c2", 00:21:37.619 "is_configured": true, 00:21:37.619 "data_offset": 0, 00:21:37.619 "data_size": 65536 00:21:37.619 }, 00:21:37.619 { 00:21:37.619 "name": "BaseBdev2", 00:21:37.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.619 "is_configured": false, 00:21:37.619 "data_offset": 0, 00:21:37.619 "data_size": 0 00:21:37.619 } 00:21:37.619 ] 00:21:37.619 }' 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.619 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.186 [2024-11-20 13:41:40.889884] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:38.186 [2024-11-20 13:41:40.890108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.186 [2024-11-20 13:41:40.897928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:38.186 [2024-11-20 13:41:40.900391] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:38.186 [2024-11-20 13:41:40.900449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.186 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.186 "name": "Existed_Raid", 00:21:38.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.186 "strip_size_kb": 0, 00:21:38.186 "state": "configuring", 00:21:38.186 "raid_level": "raid1", 00:21:38.186 "superblock": false, 00:21:38.186 "num_base_bdevs": 2, 00:21:38.186 "num_base_bdevs_discovered": 1, 00:21:38.186 "num_base_bdevs_operational": 2, 00:21:38.186 "base_bdevs_list": [ 00:21:38.186 { 00:21:38.186 "name": "BaseBdev1", 00:21:38.187 "uuid": "d60de503-df47-419f-acd5-dd00c76263c2", 00:21:38.187 
"is_configured": true, 00:21:38.187 "data_offset": 0, 00:21:38.187 "data_size": 65536 00:21:38.187 }, 00:21:38.187 { 00:21:38.187 "name": "BaseBdev2", 00:21:38.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.187 "is_configured": false, 00:21:38.187 "data_offset": 0, 00:21:38.187 "data_size": 0 00:21:38.187 } 00:21:38.187 ] 00:21:38.187 }' 00:21:38.187 13:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.187 13:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.755 13:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:38.755 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.755 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.755 [2024-11-20 13:41:41.461288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:38.755 [2024-11-20 13:41:41.461355] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:38.755 [2024-11-20 13:41:41.461369] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:38.755 [2024-11-20 13:41:41.461706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:38.755 [2024-11-20 13:41:41.461971] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:38.755 [2024-11-20 13:41:41.461994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:38.755 [2024-11-20 13:41:41.462320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.755 BaseBdev2 00:21:38.755 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.755 13:41:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:38.755 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:38.755 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:38.755 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:21:38.755 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:38.755 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:38.755 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:38.755 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.755 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.755 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.755 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:38.755 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.756 [ 00:21:38.756 { 00:21:38.756 "name": "BaseBdev2", 00:21:38.756 "aliases": [ 00:21:38.756 "88b0b91f-60d0-4090-98c7-0c30ca4bd32a" 00:21:38.756 ], 00:21:38.756 "product_name": "Malloc disk", 00:21:38.756 "block_size": 512, 00:21:38.756 "num_blocks": 65536, 00:21:38.756 "uuid": "88b0b91f-60d0-4090-98c7-0c30ca4bd32a", 00:21:38.756 "assigned_rate_limits": { 00:21:38.756 "rw_ios_per_sec": 0, 00:21:38.756 "rw_mbytes_per_sec": 0, 00:21:38.756 "r_mbytes_per_sec": 0, 00:21:38.756 "w_mbytes_per_sec": 0 00:21:38.756 }, 00:21:38.756 "claimed": true, 00:21:38.756 "claim_type": 
"exclusive_write", 00:21:38.756 "zoned": false, 00:21:38.756 "supported_io_types": { 00:21:38.756 "read": true, 00:21:38.756 "write": true, 00:21:38.756 "unmap": true, 00:21:38.756 "flush": true, 00:21:38.756 "reset": true, 00:21:38.756 "nvme_admin": false, 00:21:38.756 "nvme_io": false, 00:21:38.756 "nvme_io_md": false, 00:21:38.756 "write_zeroes": true, 00:21:38.756 "zcopy": true, 00:21:38.756 "get_zone_info": false, 00:21:38.756 "zone_management": false, 00:21:38.756 "zone_append": false, 00:21:38.756 "compare": false, 00:21:38.756 "compare_and_write": false, 00:21:38.756 "abort": true, 00:21:38.756 "seek_hole": false, 00:21:38.756 "seek_data": false, 00:21:38.756 "copy": true, 00:21:38.756 "nvme_iov_md": false 00:21:38.756 }, 00:21:38.756 "memory_domains": [ 00:21:38.756 { 00:21:38.756 "dma_device_id": "system", 00:21:38.756 "dma_device_type": 1 00:21:38.756 }, 00:21:38.756 { 00:21:38.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.756 "dma_device_type": 2 00:21:38.756 } 00:21:38.756 ], 00:21:38.756 "driver_specific": {} 00:21:38.756 } 00:21:38.756 ] 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:38.756 
13:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.756 "name": "Existed_Raid", 00:21:38.756 "uuid": "f022ef11-f109-4a08-a1ff-03b7c682b9d3", 00:21:38.756 "strip_size_kb": 0, 00:21:38.756 "state": "online", 00:21:38.756 "raid_level": "raid1", 00:21:38.756 "superblock": false, 00:21:38.756 "num_base_bdevs": 2, 00:21:38.756 "num_base_bdevs_discovered": 2, 00:21:38.756 "num_base_bdevs_operational": 2, 00:21:38.756 "base_bdevs_list": [ 00:21:38.756 { 00:21:38.756 "name": "BaseBdev1", 00:21:38.756 "uuid": "d60de503-df47-419f-acd5-dd00c76263c2", 00:21:38.756 "is_configured": true, 00:21:38.756 "data_offset": 0, 00:21:38.756 "data_size": 65536 00:21:38.756 }, 00:21:38.756 { 00:21:38.756 "name": "BaseBdev2", 
00:21:38.756 "uuid": "88b0b91f-60d0-4090-98c7-0c30ca4bd32a", 00:21:38.756 "is_configured": true, 00:21:38.756 "data_offset": 0, 00:21:38.756 "data_size": 65536 00:21:38.756 } 00:21:38.756 ] 00:21:38.756 }' 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.756 13:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.322 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:39.322 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:39.322 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:39.322 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:39.322 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:39.322 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:39.322 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:39.322 13:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.322 13:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.322 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:39.322 [2024-11-20 13:41:42.049868] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:39.322 13:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.322 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:39.322 "name": "Existed_Raid", 00:21:39.322 "aliases": [ 00:21:39.322 "f022ef11-f109-4a08-a1ff-03b7c682b9d3" 00:21:39.322 ], 
00:21:39.322 "product_name": "Raid Volume", 00:21:39.322 "block_size": 512, 00:21:39.322 "num_blocks": 65536, 00:21:39.322 "uuid": "f022ef11-f109-4a08-a1ff-03b7c682b9d3", 00:21:39.322 "assigned_rate_limits": { 00:21:39.322 "rw_ios_per_sec": 0, 00:21:39.322 "rw_mbytes_per_sec": 0, 00:21:39.322 "r_mbytes_per_sec": 0, 00:21:39.322 "w_mbytes_per_sec": 0 00:21:39.322 }, 00:21:39.322 "claimed": false, 00:21:39.322 "zoned": false, 00:21:39.322 "supported_io_types": { 00:21:39.322 "read": true, 00:21:39.322 "write": true, 00:21:39.322 "unmap": false, 00:21:39.322 "flush": false, 00:21:39.322 "reset": true, 00:21:39.322 "nvme_admin": false, 00:21:39.322 "nvme_io": false, 00:21:39.322 "nvme_io_md": false, 00:21:39.322 "write_zeroes": true, 00:21:39.322 "zcopy": false, 00:21:39.322 "get_zone_info": false, 00:21:39.322 "zone_management": false, 00:21:39.322 "zone_append": false, 00:21:39.322 "compare": false, 00:21:39.322 "compare_and_write": false, 00:21:39.322 "abort": false, 00:21:39.322 "seek_hole": false, 00:21:39.322 "seek_data": false, 00:21:39.322 "copy": false, 00:21:39.322 "nvme_iov_md": false 00:21:39.322 }, 00:21:39.322 "memory_domains": [ 00:21:39.322 { 00:21:39.322 "dma_device_id": "system", 00:21:39.322 "dma_device_type": 1 00:21:39.322 }, 00:21:39.322 { 00:21:39.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.322 "dma_device_type": 2 00:21:39.322 }, 00:21:39.322 { 00:21:39.322 "dma_device_id": "system", 00:21:39.322 "dma_device_type": 1 00:21:39.322 }, 00:21:39.322 { 00:21:39.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.322 "dma_device_type": 2 00:21:39.322 } 00:21:39.322 ], 00:21:39.322 "driver_specific": { 00:21:39.322 "raid": { 00:21:39.322 "uuid": "f022ef11-f109-4a08-a1ff-03b7c682b9d3", 00:21:39.322 "strip_size_kb": 0, 00:21:39.322 "state": "online", 00:21:39.322 "raid_level": "raid1", 00:21:39.322 "superblock": false, 00:21:39.322 "num_base_bdevs": 2, 00:21:39.322 "num_base_bdevs_discovered": 2, 00:21:39.322 "num_base_bdevs_operational": 
2, 00:21:39.322 "base_bdevs_list": [ 00:21:39.322 { 00:21:39.322 "name": "BaseBdev1", 00:21:39.322 "uuid": "d60de503-df47-419f-acd5-dd00c76263c2", 00:21:39.322 "is_configured": true, 00:21:39.322 "data_offset": 0, 00:21:39.322 "data_size": 65536 00:21:39.322 }, 00:21:39.322 { 00:21:39.322 "name": "BaseBdev2", 00:21:39.322 "uuid": "88b0b91f-60d0-4090-98c7-0c30ca4bd32a", 00:21:39.322 "is_configured": true, 00:21:39.322 "data_offset": 0, 00:21:39.322 "data_size": 65536 00:21:39.322 } 00:21:39.322 ] 00:21:39.322 } 00:21:39.322 } 00:21:39.322 }' 00:21:39.322 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:39.322 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:39.322 BaseBdev2' 00:21:39.322 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.322 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:39.322 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:39.323 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.323 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:39.323 13:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.323 13:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.323 13:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:39.581 13:41:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.581 [2024-11-20 13:41:42.317660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.581 "name": "Existed_Raid", 00:21:39.581 "uuid": 
"f022ef11-f109-4a08-a1ff-03b7c682b9d3", 00:21:39.581 "strip_size_kb": 0, 00:21:39.581 "state": "online", 00:21:39.581 "raid_level": "raid1", 00:21:39.581 "superblock": false, 00:21:39.581 "num_base_bdevs": 2, 00:21:39.581 "num_base_bdevs_discovered": 1, 00:21:39.581 "num_base_bdevs_operational": 1, 00:21:39.581 "base_bdevs_list": [ 00:21:39.581 { 00:21:39.581 "name": null, 00:21:39.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.581 "is_configured": false, 00:21:39.581 "data_offset": 0, 00:21:39.581 "data_size": 65536 00:21:39.581 }, 00:21:39.581 { 00:21:39.581 "name": "BaseBdev2", 00:21:39.581 "uuid": "88b0b91f-60d0-4090-98c7-0c30ca4bd32a", 00:21:39.581 "is_configured": true, 00:21:39.581 "data_offset": 0, 00:21:39.581 "data_size": 65536 00:21:39.581 } 00:21:39.581 ] 00:21:39.581 }' 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.581 13:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.147 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:40.147 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:40.147 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.147 13:41:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:40.147 13:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.147 13:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.147 13:41:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.147 13:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:40.147 13:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:21:40.147 13:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:40.147 13:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.147 13:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.147 [2024-11-20 13:41:43.008875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:40.147 [2024-11-20 13:41:43.009021] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:40.406 [2024-11-20 13:41:43.096971] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:40.406 [2024-11-20 13:41:43.097049] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:40.406 [2024-11-20 13:41:43.097072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:40.406 
13:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62844 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62844 ']' 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62844 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62844 00:21:40.406 killing process with pid 62844 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62844' 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62844 00:21:40.406 13:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62844 00:21:40.406 [2024-11-20 13:41:43.187937] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:40.406 [2024-11-20 13:41:43.202802] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:41.342 13:41:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:21:41.342 00:21:41.342 real 0m5.648s 00:21:41.342 user 0m8.561s 00:21:41.342 sys 0m0.806s 00:21:41.342 ************************************ 00:21:41.342 END TEST raid_state_function_test 00:21:41.342 
************************************ 00:21:41.342 13:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:41.342 13:41:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.602 13:41:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:21:41.602 13:41:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:41.602 13:41:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:41.602 13:41:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:41.602 ************************************ 00:21:41.602 START TEST raid_state_function_test_sb 00:21:41.602 ************************************ 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:41.602 Process raid pid: 63097 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63097 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63097' 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63097 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@835 -- # '[' -z 63097 ']' 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.602 13:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.602 [2024-11-20 13:41:44.404696] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:21:41.602 [2024-11-20 13:41:44.405154] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.861 [2024-11-20 13:41:44.595623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.861 [2024-11-20 13:41:44.759988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.120 [2024-11-20 13:41:44.995501] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:42.120 [2024-11-20 13:41:44.995796] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.690 [2024-11-20 13:41:45.463526] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:42.690 [2024-11-20 13:41:45.463639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:42.690 [2024-11-20 13:41:45.463657] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:42.690 [2024-11-20 13:41:45.463674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.690 "name": "Existed_Raid", 00:21:42.690 "uuid": "c2c50025-3483-4fb2-b0d8-aef83378d9c6", 00:21:42.690 "strip_size_kb": 0, 00:21:42.690 "state": "configuring", 00:21:42.690 "raid_level": "raid1", 00:21:42.690 "superblock": true, 00:21:42.690 "num_base_bdevs": 2, 00:21:42.690 "num_base_bdevs_discovered": 0, 00:21:42.690 "num_base_bdevs_operational": 2, 00:21:42.690 "base_bdevs_list": [ 00:21:42.690 { 00:21:42.690 "name": "BaseBdev1", 00:21:42.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.690 "is_configured": false, 00:21:42.690 "data_offset": 0, 00:21:42.690 "data_size": 0 00:21:42.690 }, 00:21:42.690 { 00:21:42.690 "name": "BaseBdev2", 00:21:42.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.690 "is_configured": false, 00:21:42.690 "data_offset": 0, 00:21:42.690 "data_size": 0 00:21:42.690 } 00:21:42.690 ] 00:21:42.690 }' 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.690 13:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.261 13:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:21:43.261 13:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.261 13:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.261 [2024-11-20 13:41:45.975651] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:43.261 [2024-11-20 13:41:45.975857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:43.261 13:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.261 13:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:43.261 13:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.261 13:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.261 [2024-11-20 13:41:45.983629] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:43.261 [2024-11-20 13:41:45.983691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:43.261 [2024-11-20 13:41:45.983706] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:43.261 [2024-11-20 13:41:45.983723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:43.261 13:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.261 13:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:43.261 13:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.261 13:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:43.261 [2024-11-20 13:41:46.031381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:43.261 BaseBdev1 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.261 [ 00:21:43.261 { 00:21:43.261 "name": "BaseBdev1", 00:21:43.261 "aliases": [ 00:21:43.261 "b81410ab-1049-4abc-b1d6-bc885bd4441d" 00:21:43.261 ], 00:21:43.261 "product_name": "Malloc disk", 00:21:43.261 "block_size": 512, 
00:21:43.261 "num_blocks": 65536, 00:21:43.261 "uuid": "b81410ab-1049-4abc-b1d6-bc885bd4441d", 00:21:43.261 "assigned_rate_limits": { 00:21:43.261 "rw_ios_per_sec": 0, 00:21:43.261 "rw_mbytes_per_sec": 0, 00:21:43.261 "r_mbytes_per_sec": 0, 00:21:43.261 "w_mbytes_per_sec": 0 00:21:43.261 }, 00:21:43.261 "claimed": true, 00:21:43.261 "claim_type": "exclusive_write", 00:21:43.261 "zoned": false, 00:21:43.261 "supported_io_types": { 00:21:43.261 "read": true, 00:21:43.261 "write": true, 00:21:43.261 "unmap": true, 00:21:43.261 "flush": true, 00:21:43.261 "reset": true, 00:21:43.261 "nvme_admin": false, 00:21:43.261 "nvme_io": false, 00:21:43.261 "nvme_io_md": false, 00:21:43.261 "write_zeroes": true, 00:21:43.261 "zcopy": true, 00:21:43.261 "get_zone_info": false, 00:21:43.261 "zone_management": false, 00:21:43.261 "zone_append": false, 00:21:43.261 "compare": false, 00:21:43.261 "compare_and_write": false, 00:21:43.261 "abort": true, 00:21:43.261 "seek_hole": false, 00:21:43.261 "seek_data": false, 00:21:43.261 "copy": true, 00:21:43.261 "nvme_iov_md": false 00:21:43.261 }, 00:21:43.261 "memory_domains": [ 00:21:43.261 { 00:21:43.261 "dma_device_id": "system", 00:21:43.261 "dma_device_type": 1 00:21:43.261 }, 00:21:43.261 { 00:21:43.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.261 "dma_device_type": 2 00:21:43.261 } 00:21:43.261 ], 00:21:43.261 "driver_specific": {} 00:21:43.261 } 00:21:43.261 ] 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:43.261 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.262 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.262 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.262 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.262 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.262 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.262 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:43.262 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.262 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.262 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.262 "name": "Existed_Raid", 00:21:43.262 "uuid": "1ea2095c-38e2-4e99-8d64-11cf40c43714", 00:21:43.262 "strip_size_kb": 0, 00:21:43.262 "state": "configuring", 00:21:43.262 "raid_level": "raid1", 00:21:43.262 "superblock": true, 00:21:43.262 "num_base_bdevs": 2, 00:21:43.262 "num_base_bdevs_discovered": 1, 00:21:43.262 "num_base_bdevs_operational": 2, 00:21:43.262 "base_bdevs_list": [ 00:21:43.262 { 00:21:43.262 "name": "BaseBdev1", 
00:21:43.262 "uuid": "b81410ab-1049-4abc-b1d6-bc885bd4441d", 00:21:43.262 "is_configured": true, 00:21:43.262 "data_offset": 2048, 00:21:43.262 "data_size": 63488 00:21:43.262 }, 00:21:43.262 { 00:21:43.262 "name": "BaseBdev2", 00:21:43.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.262 "is_configured": false, 00:21:43.262 "data_offset": 0, 00:21:43.262 "data_size": 0 00:21:43.262 } 00:21:43.262 ] 00:21:43.262 }' 00:21:43.262 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.262 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.829 [2024-11-20 13:41:46.591624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:43.829 [2024-11-20 13:41:46.592215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.829 [2024-11-20 13:41:46.599710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:43.829 [2024-11-20 13:41:46.602360] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:21:43.829 [2024-11-20 13:41:46.602422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.829 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.830 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:43.830 13:41:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.830 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.830 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.830 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.830 "name": "Existed_Raid", 00:21:43.830 "uuid": "a0f96d6c-4171-419e-9f07-5275104e2a77", 00:21:43.830 "strip_size_kb": 0, 00:21:43.830 "state": "configuring", 00:21:43.830 "raid_level": "raid1", 00:21:43.830 "superblock": true, 00:21:43.830 "num_base_bdevs": 2, 00:21:43.830 "num_base_bdevs_discovered": 1, 00:21:43.830 "num_base_bdevs_operational": 2, 00:21:43.830 "base_bdevs_list": [ 00:21:43.830 { 00:21:43.830 "name": "BaseBdev1", 00:21:43.830 "uuid": "b81410ab-1049-4abc-b1d6-bc885bd4441d", 00:21:43.830 "is_configured": true, 00:21:43.830 "data_offset": 2048, 00:21:43.830 "data_size": 63488 00:21:43.830 }, 00:21:43.830 { 00:21:43.830 "name": "BaseBdev2", 00:21:43.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.830 "is_configured": false, 00:21:43.830 "data_offset": 0, 00:21:43.830 "data_size": 0 00:21:43.830 } 00:21:43.830 ] 00:21:43.830 }' 00:21:43.830 13:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.830 13:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.398 [2024-11-20 13:41:47.191604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:44.398 [2024-11-20 13:41:47.192179] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:44.398 [2024-11-20 13:41:47.192206] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:44.398 BaseBdev2 00:21:44.398 [2024-11-20 13:41:47.192573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:44.398 [2024-11-20 13:41:47.192799] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:44.398 [2024-11-20 13:41:47.192823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:44.398 [2024-11-20 13:41:47.193036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.398 [ 00:21:44.398 { 00:21:44.398 "name": "BaseBdev2", 00:21:44.398 "aliases": [ 00:21:44.398 "4dfbe7c8-bbce-422d-87b2-180083d1f902" 00:21:44.398 ], 00:21:44.398 "product_name": "Malloc disk", 00:21:44.398 "block_size": 512, 00:21:44.398 "num_blocks": 65536, 00:21:44.398 "uuid": "4dfbe7c8-bbce-422d-87b2-180083d1f902", 00:21:44.398 "assigned_rate_limits": { 00:21:44.398 "rw_ios_per_sec": 0, 00:21:44.398 "rw_mbytes_per_sec": 0, 00:21:44.398 "r_mbytes_per_sec": 0, 00:21:44.398 "w_mbytes_per_sec": 0 00:21:44.398 }, 00:21:44.398 "claimed": true, 00:21:44.398 "claim_type": "exclusive_write", 00:21:44.398 "zoned": false, 00:21:44.398 "supported_io_types": { 00:21:44.398 "read": true, 00:21:44.398 "write": true, 00:21:44.398 "unmap": true, 00:21:44.398 "flush": true, 00:21:44.398 "reset": true, 00:21:44.398 "nvme_admin": false, 00:21:44.398 "nvme_io": false, 00:21:44.398 "nvme_io_md": false, 00:21:44.398 "write_zeroes": true, 00:21:44.398 "zcopy": true, 00:21:44.398 "get_zone_info": false, 00:21:44.398 "zone_management": false, 00:21:44.398 "zone_append": false, 00:21:44.398 "compare": false, 00:21:44.398 "compare_and_write": false, 00:21:44.398 "abort": true, 00:21:44.398 "seek_hole": false, 00:21:44.398 "seek_data": false, 00:21:44.398 "copy": true, 00:21:44.398 "nvme_iov_md": false 00:21:44.398 }, 00:21:44.398 "memory_domains": [ 00:21:44.398 { 00:21:44.398 "dma_device_id": "system", 00:21:44.398 "dma_device_type": 1 00:21:44.398 }, 00:21:44.398 { 00:21:44.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.398 "dma_device_type": 2 00:21:44.398 } 00:21:44.398 ], 00:21:44.398 "driver_specific": 
{} 00:21:44.398 } 00:21:44.398 ] 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.398 
13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.398 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.398 "name": "Existed_Raid", 00:21:44.398 "uuid": "a0f96d6c-4171-419e-9f07-5275104e2a77", 00:21:44.398 "strip_size_kb": 0, 00:21:44.398 "state": "online", 00:21:44.398 "raid_level": "raid1", 00:21:44.398 "superblock": true, 00:21:44.398 "num_base_bdevs": 2, 00:21:44.398 "num_base_bdevs_discovered": 2, 00:21:44.398 "num_base_bdevs_operational": 2, 00:21:44.398 "base_bdevs_list": [ 00:21:44.398 { 00:21:44.398 "name": "BaseBdev1", 00:21:44.398 "uuid": "b81410ab-1049-4abc-b1d6-bc885bd4441d", 00:21:44.399 "is_configured": true, 00:21:44.399 "data_offset": 2048, 00:21:44.399 "data_size": 63488 00:21:44.399 }, 00:21:44.399 { 00:21:44.399 "name": "BaseBdev2", 00:21:44.399 "uuid": "4dfbe7c8-bbce-422d-87b2-180083d1f902", 00:21:44.399 "is_configured": true, 00:21:44.399 "data_offset": 2048, 00:21:44.399 "data_size": 63488 00:21:44.399 } 00:21:44.399 ] 00:21:44.399 }' 00:21:44.399 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.399 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.966 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:44.966 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:44.966 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:44.966 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:44.966 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # 
local name 00:21:44.966 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:44.966 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:44.966 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.966 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.966 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:44.966 [2024-11-20 13:41:47.744191] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:44.966 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.966 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:44.966 "name": "Existed_Raid", 00:21:44.966 "aliases": [ 00:21:44.966 "a0f96d6c-4171-419e-9f07-5275104e2a77" 00:21:44.966 ], 00:21:44.966 "product_name": "Raid Volume", 00:21:44.966 "block_size": 512, 00:21:44.966 "num_blocks": 63488, 00:21:44.966 "uuid": "a0f96d6c-4171-419e-9f07-5275104e2a77", 00:21:44.966 "assigned_rate_limits": { 00:21:44.966 "rw_ios_per_sec": 0, 00:21:44.966 "rw_mbytes_per_sec": 0, 00:21:44.966 "r_mbytes_per_sec": 0, 00:21:44.966 "w_mbytes_per_sec": 0 00:21:44.966 }, 00:21:44.966 "claimed": false, 00:21:44.966 "zoned": false, 00:21:44.966 "supported_io_types": { 00:21:44.966 "read": true, 00:21:44.966 "write": true, 00:21:44.966 "unmap": false, 00:21:44.966 "flush": false, 00:21:44.966 "reset": true, 00:21:44.966 "nvme_admin": false, 00:21:44.966 "nvme_io": false, 00:21:44.966 "nvme_io_md": false, 00:21:44.966 "write_zeroes": true, 00:21:44.966 "zcopy": false, 00:21:44.966 "get_zone_info": false, 00:21:44.966 "zone_management": false, 00:21:44.966 "zone_append": false, 00:21:44.966 "compare": false, 00:21:44.966 "compare_and_write": 
false, 00:21:44.966 "abort": false, 00:21:44.966 "seek_hole": false, 00:21:44.966 "seek_data": false, 00:21:44.966 "copy": false, 00:21:44.966 "nvme_iov_md": false 00:21:44.966 }, 00:21:44.966 "memory_domains": [ 00:21:44.966 { 00:21:44.966 "dma_device_id": "system", 00:21:44.966 "dma_device_type": 1 00:21:44.966 }, 00:21:44.966 { 00:21:44.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.966 "dma_device_type": 2 00:21:44.966 }, 00:21:44.966 { 00:21:44.966 "dma_device_id": "system", 00:21:44.966 "dma_device_type": 1 00:21:44.966 }, 00:21:44.966 { 00:21:44.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.966 "dma_device_type": 2 00:21:44.966 } 00:21:44.966 ], 00:21:44.966 "driver_specific": { 00:21:44.966 "raid": { 00:21:44.966 "uuid": "a0f96d6c-4171-419e-9f07-5275104e2a77", 00:21:44.966 "strip_size_kb": 0, 00:21:44.966 "state": "online", 00:21:44.966 "raid_level": "raid1", 00:21:44.966 "superblock": true, 00:21:44.966 "num_base_bdevs": 2, 00:21:44.966 "num_base_bdevs_discovered": 2, 00:21:44.966 "num_base_bdevs_operational": 2, 00:21:44.966 "base_bdevs_list": [ 00:21:44.966 { 00:21:44.966 "name": "BaseBdev1", 00:21:44.966 "uuid": "b81410ab-1049-4abc-b1d6-bc885bd4441d", 00:21:44.966 "is_configured": true, 00:21:44.966 "data_offset": 2048, 00:21:44.966 "data_size": 63488 00:21:44.966 }, 00:21:44.966 { 00:21:44.966 "name": "BaseBdev2", 00:21:44.966 "uuid": "4dfbe7c8-bbce-422d-87b2-180083d1f902", 00:21:44.966 "is_configured": true, 00:21:44.966 "data_offset": 2048, 00:21:44.966 "data_size": 63488 00:21:44.966 } 00:21:44.966 ] 00:21:44.966 } 00:21:44.966 } 00:21:44.966 }' 00:21:44.966 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:44.966 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:44.966 BaseBdev2' 00:21:44.966 13:41:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:45.225 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:45.225 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:45.225 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:45.225 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:45.225 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.225 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.225 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.225 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:45.225 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:45.225 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:45.225 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:45.225 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.225 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.225 13:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:45.225 13:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.225 [2024-11-20 13:41:48.015996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:45.225 13:41:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.225 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.484 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:45.484 "name": "Existed_Raid", 00:21:45.484 "uuid": "a0f96d6c-4171-419e-9f07-5275104e2a77", 00:21:45.484 "strip_size_kb": 0, 00:21:45.484 "state": "online", 00:21:45.484 "raid_level": "raid1", 00:21:45.484 "superblock": true, 00:21:45.484 "num_base_bdevs": 2, 00:21:45.484 "num_base_bdevs_discovered": 1, 00:21:45.484 "num_base_bdevs_operational": 1, 00:21:45.484 "base_bdevs_list": [ 00:21:45.484 { 00:21:45.484 "name": null, 00:21:45.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.484 "is_configured": false, 00:21:45.484 "data_offset": 0, 00:21:45.484 "data_size": 63488 00:21:45.484 }, 00:21:45.484 { 00:21:45.484 "name": "BaseBdev2", 00:21:45.484 "uuid": "4dfbe7c8-bbce-422d-87b2-180083d1f902", 00:21:45.484 "is_configured": true, 00:21:45.484 "data_offset": 2048, 00:21:45.484 "data_size": 63488 00:21:45.484 } 00:21:45.484 ] 00:21:45.484 }' 00:21:45.484 
13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:45.484 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.051 [2024-11-20 13:41:48.743572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:46.051 [2024-11-20 13:41:48.743742] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:46.051 [2024-11-20 13:41:48.834652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.051 [2024-11-20 13:41:48.834729] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:46.051 [2024-11-20 13:41:48.834760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.051 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.052 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:46.052 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:46.052 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:46.052 13:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63097 00:21:46.052 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63097 ']' 00:21:46.052 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63097 00:21:46.052 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:46.052 13:41:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.052 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63097 00:21:46.052 killing process with pid 63097 00:21:46.052 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:46.052 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:46.052 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63097' 00:21:46.052 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63097 00:21:46.052 [2024-11-20 13:41:48.925789] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:46.052 13:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63097 00:21:46.052 [2024-11-20 13:41:48.941368] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:47.428 13:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:21:47.428 00:21:47.428 real 0m5.776s 00:21:47.428 user 0m8.735s 00:21:47.428 sys 0m0.801s 00:21:47.428 ************************************ 00:21:47.428 END TEST raid_state_function_test_sb 00:21:47.428 ************************************ 00:21:47.428 13:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.428 13:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.428 13:41:50 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:21:47.429 13:41:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:47.429 13:41:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.429 13:41:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:47.429 
************************************ 00:21:47.429 START TEST raid_superblock_test 00:21:47.429 ************************************ 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63360 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63360 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63360 ']' 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.429 13:41:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.429 [2024-11-20 13:41:50.240766] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:21:47.429 [2024-11-20 13:41:50.241001] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63360 ] 00:21:47.687 [2024-11-20 13:41:50.436253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.946 [2024-11-20 13:41:50.604790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.946 [2024-11-20 13:41:50.830683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:47.946 [2024-11-20 13:41:50.830769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:21:48.515 
13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.515 malloc1 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.515 [2024-11-20 13:41:51.357376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:48.515 [2024-11-20 13:41:51.357622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.515 [2024-11-20 13:41:51.357674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:48.515 [2024-11-20 13:41:51.357695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.515 [2024-11-20 13:41:51.360741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.515 [2024-11-20 13:41:51.360967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:48.515 pt1 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:48.515 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.516 malloc2 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.516 [2024-11-20 13:41:51.412406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:48.516 [2024-11-20 13:41:51.412490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.516 [2024-11-20 13:41:51.412536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:48.516 [2024-11-20 13:41:51.412554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.516 [2024-11-20 13:41:51.415388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.516 [2024-11-20 13:41:51.415441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:48.516 
pt2 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.516 [2024-11-20 13:41:51.420498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:48.516 [2024-11-20 13:41:51.423138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:48.516 [2024-11-20 13:41:51.423377] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:48.516 [2024-11-20 13:41:51.423404] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:48.516 [2024-11-20 13:41:51.423787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:48.516 [2024-11-20 13:41:51.424043] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:48.516 [2024-11-20 13:41:51.424076] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:48.516 [2024-11-20 13:41:51.424266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.516 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.775 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.775 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.775 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.775 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.775 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.775 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.775 "name": "raid_bdev1", 00:21:48.775 "uuid": "84e307b9-038f-42cd-bcaf-52d1342c4d0f", 00:21:48.775 "strip_size_kb": 0, 00:21:48.775 "state": "online", 00:21:48.775 "raid_level": "raid1", 00:21:48.775 "superblock": true, 00:21:48.775 "num_base_bdevs": 2, 00:21:48.775 "num_base_bdevs_discovered": 2, 00:21:48.775 "num_base_bdevs_operational": 2, 00:21:48.775 "base_bdevs_list": [ 00:21:48.775 { 00:21:48.775 "name": "pt1", 00:21:48.775 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:21:48.775 "is_configured": true, 00:21:48.775 "data_offset": 2048, 00:21:48.775 "data_size": 63488 00:21:48.775 }, 00:21:48.775 { 00:21:48.775 "name": "pt2", 00:21:48.775 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:48.775 "is_configured": true, 00:21:48.775 "data_offset": 2048, 00:21:48.775 "data_size": 63488 00:21:48.775 } 00:21:48.775 ] 00:21:48.775 }' 00:21:48.775 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.775 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.034 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:49.034 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:49.034 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:49.034 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:49.034 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:49.034 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:49.034 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:49.034 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:49.034 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.034 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.034 [2024-11-20 13:41:51.937111] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:49.293 13:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.293 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:21:49.293 "name": "raid_bdev1", 00:21:49.293 "aliases": [ 00:21:49.293 "84e307b9-038f-42cd-bcaf-52d1342c4d0f" 00:21:49.293 ], 00:21:49.293 "product_name": "Raid Volume", 00:21:49.293 "block_size": 512, 00:21:49.293 "num_blocks": 63488, 00:21:49.293 "uuid": "84e307b9-038f-42cd-bcaf-52d1342c4d0f", 00:21:49.293 "assigned_rate_limits": { 00:21:49.293 "rw_ios_per_sec": 0, 00:21:49.293 "rw_mbytes_per_sec": 0, 00:21:49.293 "r_mbytes_per_sec": 0, 00:21:49.293 "w_mbytes_per_sec": 0 00:21:49.293 }, 00:21:49.293 "claimed": false, 00:21:49.293 "zoned": false, 00:21:49.293 "supported_io_types": { 00:21:49.293 "read": true, 00:21:49.293 "write": true, 00:21:49.293 "unmap": false, 00:21:49.294 "flush": false, 00:21:49.294 "reset": true, 00:21:49.294 "nvme_admin": false, 00:21:49.294 "nvme_io": false, 00:21:49.294 "nvme_io_md": false, 00:21:49.294 "write_zeroes": true, 00:21:49.294 "zcopy": false, 00:21:49.294 "get_zone_info": false, 00:21:49.294 "zone_management": false, 00:21:49.294 "zone_append": false, 00:21:49.294 "compare": false, 00:21:49.294 "compare_and_write": false, 00:21:49.294 "abort": false, 00:21:49.294 "seek_hole": false, 00:21:49.294 "seek_data": false, 00:21:49.294 "copy": false, 00:21:49.294 "nvme_iov_md": false 00:21:49.294 }, 00:21:49.294 "memory_domains": [ 00:21:49.294 { 00:21:49.294 "dma_device_id": "system", 00:21:49.294 "dma_device_type": 1 00:21:49.294 }, 00:21:49.294 { 00:21:49.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.294 "dma_device_type": 2 00:21:49.294 }, 00:21:49.294 { 00:21:49.294 "dma_device_id": "system", 00:21:49.294 "dma_device_type": 1 00:21:49.294 }, 00:21:49.294 { 00:21:49.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.294 "dma_device_type": 2 00:21:49.294 } 00:21:49.294 ], 00:21:49.294 "driver_specific": { 00:21:49.294 "raid": { 00:21:49.294 "uuid": "84e307b9-038f-42cd-bcaf-52d1342c4d0f", 00:21:49.294 "strip_size_kb": 0, 00:21:49.294 "state": "online", 00:21:49.294 "raid_level": "raid1", 
00:21:49.294 "superblock": true, 00:21:49.294 "num_base_bdevs": 2, 00:21:49.294 "num_base_bdevs_discovered": 2, 00:21:49.294 "num_base_bdevs_operational": 2, 00:21:49.294 "base_bdevs_list": [ 00:21:49.294 { 00:21:49.294 "name": "pt1", 00:21:49.294 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:49.294 "is_configured": true, 00:21:49.294 "data_offset": 2048, 00:21:49.294 "data_size": 63488 00:21:49.294 }, 00:21:49.294 { 00:21:49.294 "name": "pt2", 00:21:49.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:49.294 "is_configured": true, 00:21:49.294 "data_offset": 2048, 00:21:49.294 "data_size": 63488 00:21:49.294 } 00:21:49.294 ] 00:21:49.294 } 00:21:49.294 } 00:21:49.294 }' 00:21:49.294 13:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:49.294 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:49.294 pt2' 00:21:49.294 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:49.294 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:49.294 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:49.294 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:49.294 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.294 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.294 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:49.294 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.294 13:41:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:49.294 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:49.294 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:49.294 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:49.294 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:49.294 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.294 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.294 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.294 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:49.294 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.554 [2024-11-20 13:41:52.213195] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=84e307b9-038f-42cd-bcaf-52d1342c4d0f 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 84e307b9-038f-42cd-bcaf-52d1342c4d0f ']' 00:21:49.554 13:41:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.554 [2024-11-20 13:41:52.260780] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:49.554 [2024-11-20 13:41:52.260819] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:49.554 [2024-11-20 13:41:52.260946] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:49.554 [2024-11-20 13:41:52.261076] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:49.554 [2024-11-20 13:41:52.261109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:49.554 13:41:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:49.554 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:49.555 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:49.555 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.555 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.555 [2024-11-20 13:41:52.396889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:49.555 [2024-11-20 13:41:52.399842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:49.555 [2024-11-20 13:41:52.400098] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:49.555 [2024-11-20 13:41:52.400350] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:49.555 [2024-11-20 13:41:52.400556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:49.555 [2024-11-20 13:41:52.400704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:49.555 request: 00:21:49.555 { 00:21:49.555 "name": "raid_bdev1", 00:21:49.555 "raid_level": "raid1", 00:21:49.555 "base_bdevs": [ 00:21:49.555 "malloc1", 00:21:49.555 "malloc2" 00:21:49.555 ], 00:21:49.555 "superblock": false, 00:21:49.555 "method": "bdev_raid_create", 00:21:49.555 "req_id": 1 00:21:49.555 } 00:21:49.555 Got 
JSON-RPC error response 00:21:49.555 response: 00:21:49.555 { 00:21:49.555 "code": -17, 00:21:49.555 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:49.555 } 00:21:49.555 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:49.555 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:21:49.555 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:49.555 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:49.555 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:49.555 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.555 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:49.555 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.555 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.555 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.555 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:49.555 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:49.555 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:49.555 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.555 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.555 [2024-11-20 13:41:52.465196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:49.555 [2024-11-20 13:41:52.465437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:21:49.555 [2024-11-20 13:41:52.465526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:49.555 [2024-11-20 13:41:52.465833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.814 [2024-11-20 13:41:52.469165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.814 [2024-11-20 13:41:52.469346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:49.814 [2024-11-20 13:41:52.469540] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:49.814 [2024-11-20 13:41:52.469647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:49.814 pt1 00:21:49.814 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.814 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:49.814 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:49.814 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:49.814 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:49.814 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:49.814 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:49.814 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.814 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.814 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.814 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.814 
13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.814 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.814 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.814 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.814 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.814 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:49.814 "name": "raid_bdev1", 00:21:49.814 "uuid": "84e307b9-038f-42cd-bcaf-52d1342c4d0f", 00:21:49.814 "strip_size_kb": 0, 00:21:49.814 "state": "configuring", 00:21:49.814 "raid_level": "raid1", 00:21:49.814 "superblock": true, 00:21:49.814 "num_base_bdevs": 2, 00:21:49.814 "num_base_bdevs_discovered": 1, 00:21:49.814 "num_base_bdevs_operational": 2, 00:21:49.814 "base_bdevs_list": [ 00:21:49.814 { 00:21:49.814 "name": "pt1", 00:21:49.814 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:49.814 "is_configured": true, 00:21:49.814 "data_offset": 2048, 00:21:49.814 "data_size": 63488 00:21:49.814 }, 00:21:49.814 { 00:21:49.814 "name": null, 00:21:49.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:49.814 "is_configured": false, 00:21:49.814 "data_offset": 2048, 00:21:49.814 "data_size": 63488 00:21:49.814 } 00:21:49.814 ] 00:21:49.814 }' 00:21:49.814 13:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:49.814 13:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.383 [2024-11-20 13:41:53.009857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:50.383 [2024-11-20 13:41:53.009992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:50.383 [2024-11-20 13:41:53.010031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:50.383 [2024-11-20 13:41:53.010054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.383 [2024-11-20 13:41:53.010709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.383 [2024-11-20 13:41:53.010786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:50.383 [2024-11-20 13:41:53.010932] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:50.383 [2024-11-20 13:41:53.010990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:50.383 [2024-11-20 13:41:53.011166] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:50.383 [2024-11-20 13:41:53.011207] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:50.383 [2024-11-20 13:41:53.011531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:50.383 [2024-11-20 13:41:53.011749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:50.383 [2024-11-20 13:41:53.011768] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:21:50.383 [2024-11-20 13:41:53.011995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.383 pt2 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
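[Editor's note] The `verify_raid_bdev_state raid_bdev1 online raid1 0 2` call traced at `bdev_raid.sh@483` boils down to extracting the `raid_bdev1` entry from `bdev_raid_get_bdevs` and comparing a handful of fields against the expected values. A minimal Python sketch of that comparison, using the field values shown in the JSON dumps in this log (the helper function is illustrative, not SPDK code):

```python
# RAID bdev state as reported by `bdev_raid_get_bdevs all` in the log above (trimmed)
raid_bdev_info = {
    "name": "raid_bdev1",
    "uuid": "84e307b9-038f-42cd-bcaf-52d1342c4d0f",
    "strip_size_kb": 0,
    "state": "online",
    "raid_level": "raid1",
    "superblock": True,
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2,
}

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    """Field-by-field check mirroring the shell helper (illustrative, not SPDK code)."""
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

# Same check the trace performs after reassembling raid_bdev1 from its superblock
assert verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 2)
```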
00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.383 "name": "raid_bdev1", 00:21:50.383 "uuid": "84e307b9-038f-42cd-bcaf-52d1342c4d0f", 00:21:50.383 "strip_size_kb": 0, 00:21:50.383 "state": "online", 00:21:50.383 "raid_level": "raid1", 00:21:50.383 "superblock": true, 00:21:50.383 "num_base_bdevs": 2, 00:21:50.383 "num_base_bdevs_discovered": 2, 00:21:50.383 "num_base_bdevs_operational": 2, 00:21:50.383 "base_bdevs_list": [ 00:21:50.383 { 00:21:50.383 "name": "pt1", 00:21:50.383 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:50.383 "is_configured": true, 00:21:50.383 "data_offset": 2048, 00:21:50.383 "data_size": 63488 00:21:50.383 }, 00:21:50.383 { 00:21:50.383 "name": "pt2", 00:21:50.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:50.383 "is_configured": true, 00:21:50.383 "data_offset": 2048, 00:21:50.383 "data_size": 63488 00:21:50.383 } 00:21:50.383 ] 00:21:50.383 }' 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.383 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.642 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:50.642 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:50.642 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:50.642 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:50.642 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:50.642 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:50.642 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:50.642 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:50.642 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.642 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.642 [2024-11-20 13:41:53.550467] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:50.904 "name": "raid_bdev1", 00:21:50.904 "aliases": [ 00:21:50.904 "84e307b9-038f-42cd-bcaf-52d1342c4d0f" 00:21:50.904 ], 00:21:50.904 "product_name": "Raid Volume", 00:21:50.904 "block_size": 512, 00:21:50.904 "num_blocks": 63488, 00:21:50.904 "uuid": "84e307b9-038f-42cd-bcaf-52d1342c4d0f", 00:21:50.904 "assigned_rate_limits": { 00:21:50.904 "rw_ios_per_sec": 0, 00:21:50.904 "rw_mbytes_per_sec": 0, 00:21:50.904 "r_mbytes_per_sec": 0, 00:21:50.904 "w_mbytes_per_sec": 0 00:21:50.904 }, 00:21:50.904 "claimed": false, 00:21:50.904 "zoned": false, 00:21:50.904 "supported_io_types": { 00:21:50.904 "read": true, 00:21:50.904 "write": true, 00:21:50.904 "unmap": false, 00:21:50.904 "flush": false, 00:21:50.904 "reset": true, 00:21:50.904 "nvme_admin": false, 00:21:50.904 "nvme_io": false, 00:21:50.904 "nvme_io_md": false, 00:21:50.904 "write_zeroes": true, 00:21:50.904 "zcopy": false, 00:21:50.904 "get_zone_info": false, 00:21:50.904 "zone_management": false, 00:21:50.904 "zone_append": false, 00:21:50.904 "compare": false, 00:21:50.904 "compare_and_write": false, 00:21:50.904 "abort": false, 00:21:50.904 "seek_hole": false, 00:21:50.904 "seek_data": false, 00:21:50.904 "copy": false, 00:21:50.904 "nvme_iov_md": false 00:21:50.904 }, 00:21:50.904 "memory_domains": [ 00:21:50.904 { 00:21:50.904 "dma_device_id": 
"system", 00:21:50.904 "dma_device_type": 1 00:21:50.904 }, 00:21:50.904 { 00:21:50.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.904 "dma_device_type": 2 00:21:50.904 }, 00:21:50.904 { 00:21:50.904 "dma_device_id": "system", 00:21:50.904 "dma_device_type": 1 00:21:50.904 }, 00:21:50.904 { 00:21:50.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.904 "dma_device_type": 2 00:21:50.904 } 00:21:50.904 ], 00:21:50.904 "driver_specific": { 00:21:50.904 "raid": { 00:21:50.904 "uuid": "84e307b9-038f-42cd-bcaf-52d1342c4d0f", 00:21:50.904 "strip_size_kb": 0, 00:21:50.904 "state": "online", 00:21:50.904 "raid_level": "raid1", 00:21:50.904 "superblock": true, 00:21:50.904 "num_base_bdevs": 2, 00:21:50.904 "num_base_bdevs_discovered": 2, 00:21:50.904 "num_base_bdevs_operational": 2, 00:21:50.904 "base_bdevs_list": [ 00:21:50.904 { 00:21:50.904 "name": "pt1", 00:21:50.904 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:50.904 "is_configured": true, 00:21:50.904 "data_offset": 2048, 00:21:50.904 "data_size": 63488 00:21:50.904 }, 00:21:50.904 { 00:21:50.904 "name": "pt2", 00:21:50.904 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:50.904 "is_configured": true, 00:21:50.904 "data_offset": 2048, 00:21:50.904 "data_size": 63488 00:21:50.904 } 00:21:50.904 ] 00:21:50.904 } 00:21:50.904 } 00:21:50.904 }' 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:50.904 pt2' 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.904 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.904 [2024-11-20 13:41:53.806556] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 84e307b9-038f-42cd-bcaf-52d1342c4d0f '!=' 84e307b9-038f-42cd-bcaf-52d1342c4d0f ']' 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.167 [2024-11-20 13:41:53.862311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.167 "name": "raid_bdev1", 00:21:51.167 "uuid": "84e307b9-038f-42cd-bcaf-52d1342c4d0f", 00:21:51.167 "strip_size_kb": 0, 00:21:51.167 "state": "online", 00:21:51.167 "raid_level": "raid1", 00:21:51.167 "superblock": true, 00:21:51.167 "num_base_bdevs": 2, 00:21:51.167 "num_base_bdevs_discovered": 1, 00:21:51.167 "num_base_bdevs_operational": 1, 00:21:51.167 "base_bdevs_list": [ 00:21:51.167 { 00:21:51.167 "name": null, 00:21:51.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.167 "is_configured": false, 00:21:51.167 "data_offset": 0, 00:21:51.167 "data_size": 63488 00:21:51.167 }, 00:21:51.167 { 00:21:51.167 "name": "pt2", 00:21:51.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:51.167 "is_configured": true, 00:21:51.167 "data_offset": 2048, 00:21:51.167 "data_size": 63488 00:21:51.167 } 00:21:51.167 ] 00:21:51.167 }' 
00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.167 13:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.735 [2024-11-20 13:41:54.406469] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:51.735 [2024-11-20 13:41:54.406546] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:51.735 [2024-11-20 13:41:54.406664] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:51.735 [2024-11-20 13:41:54.406753] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:51.735 [2024-11-20 13:41:54.406776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.735 [2024-11-20 13:41:54.486411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:51.735 [2024-11-20 13:41:54.486552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:51.735 [2024-11-20 13:41:54.486580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:51.735 [2024-11-20 13:41:54.486599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:51.735 
[2024-11-20 13:41:54.489993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:51.735 [2024-11-20 13:41:54.490066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:51.735 [2024-11-20 13:41:54.490211] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:51.735 [2024-11-20 13:41:54.490281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:51.735 [2024-11-20 13:41:54.490436] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:51.735 [2024-11-20 13:41:54.490463] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:51.735 [2024-11-20 13:41:54.490830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:51.735 [2024-11-20 13:41:54.491102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:51.735 [2024-11-20 13:41:54.491133] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:51.735 [2024-11-20 13:41:54.491369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:51.735 pt2 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.735 "name": "raid_bdev1", 00:21:51.735 "uuid": "84e307b9-038f-42cd-bcaf-52d1342c4d0f", 00:21:51.735 "strip_size_kb": 0, 00:21:51.735 "state": "online", 00:21:51.735 "raid_level": "raid1", 00:21:51.735 "superblock": true, 00:21:51.735 "num_base_bdevs": 2, 00:21:51.735 "num_base_bdevs_discovered": 1, 00:21:51.735 "num_base_bdevs_operational": 1, 00:21:51.735 "base_bdevs_list": [ 00:21:51.735 { 00:21:51.735 "name": null, 00:21:51.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.735 "is_configured": false, 00:21:51.735 "data_offset": 2048, 00:21:51.735 "data_size": 63488 00:21:51.735 }, 00:21:51.735 { 00:21:51.735 "name": "pt2", 00:21:51.735 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:51.735 "is_configured": true, 00:21:51.735 "data_offset": 2048, 00:21:51.735 "data_size": 63488 00:21:51.735 } 00:21:51.735 ] 00:21:51.735 }' 
00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.735 13:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.303 [2024-11-20 13:41:55.030791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:52.303 [2024-11-20 13:41:55.031015] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:52.303 [2024-11-20 13:41:55.031258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:52.303 [2024-11-20 13:41:55.031355] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:52.303 [2024-11-20 13:41:55.031375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.303 [2024-11-20 13:41:55.098871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:52.303 [2024-11-20 13:41:55.099003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:52.303 [2024-11-20 13:41:55.099048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:52.303 [2024-11-20 13:41:55.099066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:52.303 [2024-11-20 13:41:55.102311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:52.303 [2024-11-20 13:41:55.102418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:52.303 [2024-11-20 13:41:55.102577] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:52.303 [2024-11-20 13:41:55.102641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:52.303 [2024-11-20 13:41:55.102884] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:52.303 [2024-11-20 13:41:55.102906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:52.303 [2024-11-20 13:41:55.102967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:52.303 [2024-11-20 13:41:55.103045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:21:52.303 [2024-11-20 13:41:55.103238] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:52.303 [2024-11-20 13:41:55.103264] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:52.303 pt1 00:21:52.303 [2024-11-20 13:41:55.103618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:52.303 [2024-11-20 13:41:55.103847] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:52.303 [2024-11-20 13:41:55.103873] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:52.303 [2024-11-20 13:41:55.104099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.303 "name": "raid_bdev1", 00:21:52.303 "uuid": "84e307b9-038f-42cd-bcaf-52d1342c4d0f", 00:21:52.303 "strip_size_kb": 0, 00:21:52.303 "state": "online", 00:21:52.303 "raid_level": "raid1", 00:21:52.303 "superblock": true, 00:21:52.303 "num_base_bdevs": 2, 00:21:52.303 "num_base_bdevs_discovered": 1, 00:21:52.303 "num_base_bdevs_operational": 1, 00:21:52.303 "base_bdevs_list": [ 00:21:52.303 { 00:21:52.303 "name": null, 00:21:52.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.303 "is_configured": false, 00:21:52.303 "data_offset": 2048, 00:21:52.303 "data_size": 63488 00:21:52.303 }, 00:21:52.303 { 00:21:52.303 "name": "pt2", 00:21:52.303 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:52.303 "is_configured": true, 00:21:52.303 "data_offset": 2048, 00:21:52.303 "data_size": 63488 00:21:52.303 } 00:21:52.303 ] 00:21:52.303 }' 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.303 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.872 [2024-11-20 13:41:55.703546] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 84e307b9-038f-42cd-bcaf-52d1342c4d0f '!=' 84e307b9-038f-42cd-bcaf-52d1342c4d0f ']' 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63360 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63360 ']' 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63360 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63360 00:21:52.872 13:41:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63360' 00:21:52.872 killing process with pid 63360 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63360 00:21:52.872 [2024-11-20 13:41:55.784097] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:52.872 13:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63360 00:21:52.872 [2024-11-20 13:41:55.784255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:52.872 [2024-11-20 13:41:55.784366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:52.872 [2024-11-20 13:41:55.784396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:53.131 [2024-11-20 13:41:55.978963] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:54.502 13:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:54.502 00:21:54.502 real 0m7.026s 00:21:54.502 user 0m11.001s 00:21:54.502 sys 0m1.054s 00:21:54.502 ************************************ 00:21:54.502 END TEST raid_superblock_test 00:21:54.502 ************************************ 00:21:54.502 13:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:54.502 13:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.502 13:41:57 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:21:54.502 13:41:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:54.502 13:41:57 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:21:54.502 13:41:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:54.502 ************************************ 00:21:54.502 START TEST raid_read_error_test 00:21:54.502 ************************************ 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:54.502 13:41:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CGiHYsu5nU 00:21:54.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63696 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63696 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63696 ']' 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:54.502 13:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.502 [2024-11-20 13:41:57.318184] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:21:54.502 [2024-11-20 13:41:57.318640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63696 ] 00:21:54.761 [2024-11-20 13:41:57.512616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.020 [2024-11-20 13:41:57.684817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.278 [2024-11-20 13:41:57.945691] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:55.278 [2024-11-20 13:41:57.945777] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.538 BaseBdev1_malloc 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.538 true 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.538 [2024-11-20 13:41:58.373956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:55.538 [2024-11-20 13:41:58.374040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.538 [2024-11-20 13:41:58.374077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:55.538 [2024-11-20 13:41:58.374094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.538 [2024-11-20 13:41:58.377131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.538 [2024-11-20 13:41:58.377182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:55.538 BaseBdev1 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:21:55.538 BaseBdev2_malloc 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.538 true 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.538 [2024-11-20 13:41:58.431879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:55.538 [2024-11-20 13:41:58.431984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.538 [2024-11-20 13:41:58.432011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:55.538 [2024-11-20 13:41:58.432042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.538 [2024-11-20 13:41:58.435098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.538 [2024-11-20 13:41:58.435281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:55.538 BaseBdev2 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:21:55.538 13:41:58 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.538 [2024-11-20 13:41:58.440010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:55.538 [2024-11-20 13:41:58.442778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:55.538 [2024-11-20 13:41:58.443106] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:55.538 [2024-11-20 13:41:58.443131] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:55.538 [2024-11-20 13:41:58.443458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:55.538 [2024-11-20 13:41:58.443713] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:55.538 [2024-11-20 13:41:58.443730] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:55.538 [2024-11-20 13:41:58.443980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.538 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.798 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.798 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.798 "name": "raid_bdev1", 00:21:55.798 "uuid": "e3660bab-f102-449b-8604-9229216d14d6", 00:21:55.798 "strip_size_kb": 0, 00:21:55.798 "state": "online", 00:21:55.798 "raid_level": "raid1", 00:21:55.798 "superblock": true, 00:21:55.798 "num_base_bdevs": 2, 00:21:55.798 "num_base_bdevs_discovered": 2, 00:21:55.798 "num_base_bdevs_operational": 2, 00:21:55.798 "base_bdevs_list": [ 00:21:55.798 { 00:21:55.798 "name": "BaseBdev1", 00:21:55.798 "uuid": "bee0b112-78e8-5464-a84e-4e6cba3ae8d5", 00:21:55.798 "is_configured": true, 00:21:55.798 "data_offset": 2048, 00:21:55.798 "data_size": 63488 00:21:55.798 }, 00:21:55.798 { 00:21:55.798 "name": "BaseBdev2", 00:21:55.798 "uuid": "ac504bb4-e00a-5274-b838-20c9a2ef9e0d", 00:21:55.798 "is_configured": true, 00:21:55.798 "data_offset": 2048, 00:21:55.798 "data_size": 63488 00:21:55.798 } 00:21:55.798 ] 00:21:55.798 }' 00:21:55.798 13:41:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.798 13:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.057 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:21:56.057 13:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:21:56.316 [2024-11-20 13:41:59.085689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:57.252 13:41:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.252 13:41:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.252 13:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.252 "name": "raid_bdev1", 00:21:57.252 "uuid": "e3660bab-f102-449b-8604-9229216d14d6", 00:21:57.252 "strip_size_kb": 0, 00:21:57.252 "state": "online", 00:21:57.252 "raid_level": "raid1", 00:21:57.252 "superblock": true, 00:21:57.252 "num_base_bdevs": 2, 00:21:57.252 "num_base_bdevs_discovered": 2, 00:21:57.252 "num_base_bdevs_operational": 2, 00:21:57.252 "base_bdevs_list": [ 00:21:57.252 { 00:21:57.252 "name": "BaseBdev1", 00:21:57.252 "uuid": "bee0b112-78e8-5464-a84e-4e6cba3ae8d5", 00:21:57.252 "is_configured": true, 00:21:57.252 "data_offset": 2048, 00:21:57.252 "data_size": 63488 00:21:57.252 }, 00:21:57.252 { 00:21:57.252 "name": "BaseBdev2", 00:21:57.252 "uuid": "ac504bb4-e00a-5274-b838-20c9a2ef9e0d", 00:21:57.252 "is_configured": true, 00:21:57.252 "data_offset": 2048, 00:21:57.252 "data_size": 63488 
00:21:57.252 } 00:21:57.252 ] 00:21:57.252 }' 00:21:57.252 13:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.252 13:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.819 13:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:57.819 13:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.819 13:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.819 [2024-11-20 13:42:00.479830] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:57.819 [2024-11-20 13:42:00.479868] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:57.819 { 00:21:57.819 "results": [ 00:21:57.819 { 00:21:57.819 "job": "raid_bdev1", 00:21:57.819 "core_mask": "0x1", 00:21:57.819 "workload": "randrw", 00:21:57.819 "percentage": 50, 00:21:57.819 "status": "finished", 00:21:57.819 "queue_depth": 1, 00:21:57.819 "io_size": 131072, 00:21:57.820 "runtime": 1.391519, 00:21:57.820 "iops": 12096.133793358193, 00:21:57.820 "mibps": 1512.016724169774, 00:21:57.820 "io_failed": 0, 00:21:57.820 "io_timeout": 0, 00:21:57.820 "avg_latency_us": 78.22259937780849, 00:21:57.820 "min_latency_us": 38.4, 00:21:57.820 "max_latency_us": 1995.8690909090908 00:21:57.820 } 00:21:57.820 ], 00:21:57.820 "core_count": 1 00:21:57.820 } 00:21:57.820 [2024-11-20 13:42:00.483613] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:57.820 [2024-11-20 13:42:00.483668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.820 [2024-11-20 13:42:00.483828] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:57.820 [2024-11-20 13:42:00.483850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:21:57.820 13:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.820 13:42:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63696 00:21:57.820 13:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63696 ']' 00:21:57.820 13:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63696 00:21:57.820 13:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:21:57.820 13:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.820 13:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63696 00:21:57.820 13:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:57.820 killing process with pid 63696 00:21:57.820 13:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:57.820 13:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63696' 00:21:57.820 13:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63696 00:21:57.820 13:42:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63696 00:21:57.820 [2024-11-20 13:42:00.521471] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:57.820 [2024-11-20 13:42:00.649082] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:59.199 13:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:21:59.199 13:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CGiHYsu5nU 00:21:59.199 13:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:21:59.199 13:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:21:59.199 13:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:21:59.199 13:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:59.199 13:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:59.199 13:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:59.199 00:21:59.199 real 0m4.618s 00:21:59.199 user 0m5.705s 00:21:59.199 sys 0m0.623s 00:21:59.199 13:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.199 13:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.199 ************************************ 00:21:59.199 END TEST raid_read_error_test 00:21:59.199 ************************************ 00:21:59.199 13:42:01 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:21:59.199 13:42:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:59.199 13:42:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.199 13:42:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:59.199 ************************************ 00:21:59.199 START TEST raid_write_error_test 00:21:59.199 ************************************ 00:21:59.199 13:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:21:59.199 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:21:59.199 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:21:59.199 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:21:59.199 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:21:59.199 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- 
# (( i <= num_base_bdevs )) 00:21:59.199 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:21:59.199 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:59.199 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:59.199 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:21:59.199 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:21:59.199 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:21:59.199 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:59.199 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:21:59.200 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:21:59.200 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:21:59.200 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:21:59.200 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:21:59.200 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:21:59.200 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:21:59.200 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:21:59.200 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:21:59.200 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7xfFACsl9j 00:21:59.200 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63846 00:21:59.200 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # 
waitforlisten 63846 00:21:59.200 13:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:59.200 13:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63846 ']' 00:21:59.200 13:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.200 13:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.200 13:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.200 13:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.200 13:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.200 [2024-11-20 13:42:01.986309] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:21:59.200 [2024-11-20 13:42:01.986516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63846 ] 00:21:59.459 [2024-11-20 13:42:02.172524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.460 [2024-11-20 13:42:02.309164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.718 [2024-11-20 13:42:02.507881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:59.718 [2024-11-20 13:42:02.507978] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.286 BaseBdev1_malloc 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.286 true 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.286 [2024-11-20 13:42:03.090734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:00.286 [2024-11-20 13:42:03.090829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:00.286 [2024-11-20 13:42:03.090862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:00.286 [2024-11-20 13:42:03.090879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:00.286 [2024-11-20 13:42:03.093767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:00.286 [2024-11-20 13:42:03.093831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:00.286 BaseBdev1 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.286 BaseBdev2_malloc 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:22:00.286 13:42:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.286 true 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.286 [2024-11-20 13:42:03.152311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:00.286 [2024-11-20 13:42:03.152389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:00.286 [2024-11-20 13:42:03.152415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:00.286 [2024-11-20 13:42:03.152432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:00.286 [2024-11-20 13:42:03.155329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:00.286 [2024-11-20 13:42:03.155380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:00.286 BaseBdev2 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.286 [2024-11-20 13:42:03.160424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:22:00.286 [2024-11-20 13:42:03.163005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:00.286 [2024-11-20 13:42:03.163281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:00.286 [2024-11-20 13:42:03.163306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:00.286 [2024-11-20 13:42:03.163632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:00.286 [2024-11-20 13:42:03.163919] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:00.286 [2024-11-20 13:42:03.163946] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:00.286 [2024-11-20 13:42:03.164154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.286 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.545 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:00.545 "name": "raid_bdev1", 00:22:00.545 "uuid": "d726fd8e-5b8f-4817-9823-8c96ac3d07a4", 00:22:00.545 "strip_size_kb": 0, 00:22:00.545 "state": "online", 00:22:00.545 "raid_level": "raid1", 00:22:00.545 "superblock": true, 00:22:00.545 "num_base_bdevs": 2, 00:22:00.545 "num_base_bdevs_discovered": 2, 00:22:00.545 "num_base_bdevs_operational": 2, 00:22:00.545 "base_bdevs_list": [ 00:22:00.545 { 00:22:00.545 "name": "BaseBdev1", 00:22:00.545 "uuid": "d78798d7-7ade-5d6e-811c-9ed9670b5f9a", 00:22:00.545 "is_configured": true, 00:22:00.545 "data_offset": 2048, 00:22:00.545 "data_size": 63488 00:22:00.545 }, 00:22:00.545 { 00:22:00.545 "name": "BaseBdev2", 00:22:00.545 "uuid": "8518ad68-4652-5ac9-9559-947e495cb915", 00:22:00.545 "is_configured": true, 00:22:00.545 "data_offset": 2048, 00:22:00.545 "data_size": 63488 00:22:00.545 } 00:22:00.545 ] 00:22:00.545 }' 00:22:00.545 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:00.545 13:42:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.804 13:42:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:22:00.804 13:42:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:22:01.063 [2024-11-20 13:42:03.798177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.998 [2024-11-20 13:42:04.671066] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:22:01.998 [2024-11-20 13:42:04.671144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:01.998 [2024-11-20 13:42:04.671425] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.998 13:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.998 "name": "raid_bdev1", 00:22:01.998 "uuid": "d726fd8e-5b8f-4817-9823-8c96ac3d07a4", 00:22:01.998 "strip_size_kb": 0, 00:22:01.998 "state": "online", 00:22:01.998 "raid_level": "raid1", 00:22:01.998 "superblock": true, 00:22:01.999 "num_base_bdevs": 2, 00:22:01.999 "num_base_bdevs_discovered": 1, 00:22:01.999 "num_base_bdevs_operational": 1, 00:22:01.999 "base_bdevs_list": [ 00:22:01.999 { 00:22:01.999 "name": null, 00:22:01.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.999 "is_configured": false, 00:22:01.999 "data_offset": 0, 00:22:01.999 "data_size": 63488 00:22:01.999 }, 00:22:01.999 { 00:22:01.999 "name": 
"BaseBdev2", 00:22:01.999 "uuid": "8518ad68-4652-5ac9-9559-947e495cb915", 00:22:01.999 "is_configured": true, 00:22:01.999 "data_offset": 2048, 00:22:01.999 "data_size": 63488 00:22:01.999 } 00:22:01.999 ] 00:22:01.999 }' 00:22:01.999 13:42:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.999 13:42:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.566 13:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:02.566 13:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.566 13:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.566 [2024-11-20 13:42:05.182975] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:02.566 [2024-11-20 13:42:05.183012] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:02.566 [2024-11-20 13:42:05.186358] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:02.566 [2024-11-20 13:42:05.186411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.566 [2024-11-20 13:42:05.186492] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:02.566 [2024-11-20 13:42:05.186511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:02.566 { 00:22:02.566 "results": [ 00:22:02.566 { 00:22:02.566 "job": "raid_bdev1", 00:22:02.566 "core_mask": "0x1", 00:22:02.566 "workload": "randrw", 00:22:02.566 "percentage": 50, 00:22:02.566 "status": "finished", 00:22:02.566 "queue_depth": 1, 00:22:02.566 "io_size": 131072, 00:22:02.566 "runtime": 1.38197, 00:22:02.566 "iops": 13655.868072389416, 00:22:02.566 "mibps": 1706.983509048677, 00:22:02.566 "io_failed": 0, 00:22:02.566 "io_timeout": 0, 
00:22:02.566 "avg_latency_us": 68.92131488689351, 00:22:02.566 "min_latency_us": 38.167272727272724, 00:22:02.566 "max_latency_us": 1824.581818181818 00:22:02.566 } 00:22:02.566 ], 00:22:02.566 "core_count": 1 00:22:02.566 } 00:22:02.566 13:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.566 13:42:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63846 00:22:02.566 13:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63846 ']' 00:22:02.566 13:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63846 00:22:02.566 13:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:22:02.566 13:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.566 13:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63846 00:22:02.566 13:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:02.566 13:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:02.566 killing process with pid 63846 00:22:02.566 13:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63846' 00:22:02.566 13:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63846 00:22:02.566 [2024-11-20 13:42:05.220648] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:02.566 13:42:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63846 00:22:02.566 [2024-11-20 13:42:05.348274] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:03.944 13:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7xfFACsl9j 00:22:03.944 13:42:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:22:03.944 13:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:22:03.944 13:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:22:03.944 13:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:22:03.944 13:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:03.944 13:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:03.944 13:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:22:03.944 00:22:03.944 real 0m4.600s 00:22:03.944 user 0m5.791s 00:22:03.944 sys 0m0.570s 00:22:03.944 13:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:03.944 13:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.944 ************************************ 00:22:03.944 END TEST raid_write_error_test 00:22:03.944 ************************************ 00:22:03.944 13:42:06 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:22:03.944 13:42:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:22:03.944 13:42:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:22:03.944 13:42:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:03.944 13:42:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:03.944 13:42:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:03.944 ************************************ 00:22:03.944 START TEST raid_state_function_test 00:22:03.944 ************************************ 00:22:03.944 13:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:22:03.944 13:42:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:22:03.944 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:22:03.944 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:22:03.944 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:03.944 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:03.944 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:03.944 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:03.944 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:03.944 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:03.945 
13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63985 00:22:03.945 Process raid pid: 63985 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63985' 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63985 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63985 ']' 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.945 13:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:03.945 [2024-11-20 13:42:06.630786] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:22:03.945 [2024-11-20 13:42:06.631032] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.945 [2024-11-20 13:42:06.819849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.204 [2024-11-20 13:42:06.949130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.463 [2024-11-20 13:42:07.170568] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:04.463 [2024-11-20 13:42:07.170631] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.069 [2024-11-20 13:42:07.675545] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:05.069 [2024-11-20 
13:42:07.675641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:05.069 [2024-11-20 13:42:07.675687] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:05.069 [2024-11-20 13:42:07.675702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:05.069 [2024-11-20 13:42:07.675711] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:05.069 [2024-11-20 13:42:07.675724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.069 13:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:05.069 "name": "Existed_Raid", 00:22:05.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.069 "strip_size_kb": 64, 00:22:05.069 "state": "configuring", 00:22:05.069 "raid_level": "raid0", 00:22:05.069 "superblock": false, 00:22:05.069 "num_base_bdevs": 3, 00:22:05.069 "num_base_bdevs_discovered": 0, 00:22:05.069 "num_base_bdevs_operational": 3, 00:22:05.069 "base_bdevs_list": [ 00:22:05.069 { 00:22:05.069 "name": "BaseBdev1", 00:22:05.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.069 "is_configured": false, 00:22:05.069 "data_offset": 0, 00:22:05.070 "data_size": 0 00:22:05.070 }, 00:22:05.070 { 00:22:05.070 "name": "BaseBdev2", 00:22:05.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.070 "is_configured": false, 00:22:05.070 "data_offset": 0, 00:22:05.070 "data_size": 0 00:22:05.070 }, 00:22:05.070 { 00:22:05.070 "name": "BaseBdev3", 00:22:05.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.070 "is_configured": false, 00:22:05.070 "data_offset": 0, 00:22:05.070 "data_size": 0 00:22:05.070 } 00:22:05.070 ] 00:22:05.070 }' 00:22:05.070 13:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:05.070 13:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.329 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:22:05.329 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.329 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.329 [2024-11-20 13:42:08.231745] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:05.329 [2024-11-20 13:42:08.231811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:05.329 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.329 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:05.329 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.329 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.329 [2024-11-20 13:42:08.239707] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:05.329 [2024-11-20 13:42:08.239792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:05.329 [2024-11-20 13:42:08.239806] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:05.329 [2024-11-20 13:42:08.239821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:05.329 [2024-11-20 13:42:08.239830] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:05.329 [2024-11-20 13:42:08.239843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:05.588 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.588 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev1 00:22:05.588 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.588 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.588 [2024-11-20 13:42:08.285024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:05.588 BaseBdev1 00:22:05.588 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.588 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:05.588 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:05.588 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:05.588 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:05.588 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:05.588 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:05.588 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:05.588 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.588 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.588 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.588 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:05.588 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.589 [ 00:22:05.589 { 00:22:05.589 "name": 
"BaseBdev1", 00:22:05.589 "aliases": [ 00:22:05.589 "7e49f150-9603-4c56-af54-7d7fa379b7ff" 00:22:05.589 ], 00:22:05.589 "product_name": "Malloc disk", 00:22:05.589 "block_size": 512, 00:22:05.589 "num_blocks": 65536, 00:22:05.589 "uuid": "7e49f150-9603-4c56-af54-7d7fa379b7ff", 00:22:05.589 "assigned_rate_limits": { 00:22:05.589 "rw_ios_per_sec": 0, 00:22:05.589 "rw_mbytes_per_sec": 0, 00:22:05.589 "r_mbytes_per_sec": 0, 00:22:05.589 "w_mbytes_per_sec": 0 00:22:05.589 }, 00:22:05.589 "claimed": true, 00:22:05.589 "claim_type": "exclusive_write", 00:22:05.589 "zoned": false, 00:22:05.589 "supported_io_types": { 00:22:05.589 "read": true, 00:22:05.589 "write": true, 00:22:05.589 "unmap": true, 00:22:05.589 "flush": true, 00:22:05.589 "reset": true, 00:22:05.589 "nvme_admin": false, 00:22:05.589 "nvme_io": false, 00:22:05.589 "nvme_io_md": false, 00:22:05.589 "write_zeroes": true, 00:22:05.589 "zcopy": true, 00:22:05.589 "get_zone_info": false, 00:22:05.589 "zone_management": false, 00:22:05.589 "zone_append": false, 00:22:05.589 "compare": false, 00:22:05.589 "compare_and_write": false, 00:22:05.589 "abort": true, 00:22:05.589 "seek_hole": false, 00:22:05.589 "seek_data": false, 00:22:05.589 "copy": true, 00:22:05.589 "nvme_iov_md": false 00:22:05.589 }, 00:22:05.589 "memory_domains": [ 00:22:05.589 { 00:22:05.589 "dma_device_id": "system", 00:22:05.589 "dma_device_type": 1 00:22:05.589 }, 00:22:05.589 { 00:22:05.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.589 "dma_device_type": 2 00:22:05.589 } 00:22:05.589 ], 00:22:05.589 "driver_specific": {} 00:22:05.589 } 00:22:05.589 ] 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:05.589 13:42:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:05.589 "name": "Existed_Raid", 00:22:05.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.589 "strip_size_kb": 64, 00:22:05.589 "state": "configuring", 00:22:05.589 "raid_level": "raid0", 00:22:05.589 "superblock": false, 00:22:05.589 "num_base_bdevs": 3, 00:22:05.589 "num_base_bdevs_discovered": 1, 00:22:05.589 
"num_base_bdevs_operational": 3, 00:22:05.589 "base_bdevs_list": [ 00:22:05.589 { 00:22:05.589 "name": "BaseBdev1", 00:22:05.589 "uuid": "7e49f150-9603-4c56-af54-7d7fa379b7ff", 00:22:05.589 "is_configured": true, 00:22:05.589 "data_offset": 0, 00:22:05.589 "data_size": 65536 00:22:05.589 }, 00:22:05.589 { 00:22:05.589 "name": "BaseBdev2", 00:22:05.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.589 "is_configured": false, 00:22:05.589 "data_offset": 0, 00:22:05.589 "data_size": 0 00:22:05.589 }, 00:22:05.589 { 00:22:05.589 "name": "BaseBdev3", 00:22:05.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.589 "is_configured": false, 00:22:05.589 "data_offset": 0, 00:22:05.589 "data_size": 0 00:22:05.589 } 00:22:05.589 ] 00:22:05.589 }' 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:05.589 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.157 [2024-11-20 13:42:08.853483] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:06.157 [2024-11-20 13:42:08.853563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.157 
13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.157 [2024-11-20 13:42:08.861509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:06.157 [2024-11-20 13:42:08.863992] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:06.157 [2024-11-20 13:42:08.864042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:06.157 [2024-11-20 13:42:08.864058] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:06.157 [2024-11-20 13:42:08.864073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.157 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.158 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:06.158 "name": "Existed_Raid", 00:22:06.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.158 "strip_size_kb": 64, 00:22:06.158 "state": "configuring", 00:22:06.158 "raid_level": "raid0", 00:22:06.158 "superblock": false, 00:22:06.158 "num_base_bdevs": 3, 00:22:06.158 "num_base_bdevs_discovered": 1, 00:22:06.158 "num_base_bdevs_operational": 3, 00:22:06.158 "base_bdevs_list": [ 00:22:06.158 { 00:22:06.158 "name": "BaseBdev1", 00:22:06.158 "uuid": "7e49f150-9603-4c56-af54-7d7fa379b7ff", 00:22:06.158 "is_configured": true, 00:22:06.158 "data_offset": 0, 00:22:06.158 "data_size": 65536 00:22:06.158 }, 00:22:06.158 { 00:22:06.158 "name": "BaseBdev2", 00:22:06.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.158 "is_configured": false, 00:22:06.158 "data_offset": 0, 00:22:06.158 "data_size": 0 00:22:06.158 }, 00:22:06.158 { 00:22:06.158 "name": "BaseBdev3", 00:22:06.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.158 "is_configured": false, 00:22:06.158 "data_offset": 0, 00:22:06.158 "data_size": 0 00:22:06.158 } 00:22:06.158 ] 00:22:06.158 }' 
00:22:06.158 13:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:06.158 13:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.733 13:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:06.733 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.733 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.733 [2024-11-20 13:42:09.438576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:06.733 BaseBdev2 00:22:06.733 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.733 13:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:06.733 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:06.733 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:06.733 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:06.733 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:06.733 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:06.733 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:06.733 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.733 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.733 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.733 13:42:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:06.733 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.733 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.733 [ 00:22:06.733 { 00:22:06.733 "name": "BaseBdev2", 00:22:06.733 "aliases": [ 00:22:06.733 "5bb7f453-b7b7-4ac2-b321-09f3b62d5304" 00:22:06.733 ], 00:22:06.733 "product_name": "Malloc disk", 00:22:06.733 "block_size": 512, 00:22:06.733 "num_blocks": 65536, 00:22:06.733 "uuid": "5bb7f453-b7b7-4ac2-b321-09f3b62d5304", 00:22:06.733 "assigned_rate_limits": { 00:22:06.733 "rw_ios_per_sec": 0, 00:22:06.733 "rw_mbytes_per_sec": 0, 00:22:06.733 "r_mbytes_per_sec": 0, 00:22:06.733 "w_mbytes_per_sec": 0 00:22:06.733 }, 00:22:06.733 "claimed": true, 00:22:06.733 "claim_type": "exclusive_write", 00:22:06.733 "zoned": false, 00:22:06.733 "supported_io_types": { 00:22:06.733 "read": true, 00:22:06.734 "write": true, 00:22:06.734 "unmap": true, 00:22:06.734 "flush": true, 00:22:06.734 "reset": true, 00:22:06.734 "nvme_admin": false, 00:22:06.734 "nvme_io": false, 00:22:06.734 "nvme_io_md": false, 00:22:06.734 "write_zeroes": true, 00:22:06.734 "zcopy": true, 00:22:06.734 "get_zone_info": false, 00:22:06.734 "zone_management": false, 00:22:06.734 "zone_append": false, 00:22:06.734 "compare": false, 00:22:06.734 "compare_and_write": false, 00:22:06.734 "abort": true, 00:22:06.734 "seek_hole": false, 00:22:06.734 "seek_data": false, 00:22:06.734 "copy": true, 00:22:06.734 "nvme_iov_md": false 00:22:06.734 }, 00:22:06.734 "memory_domains": [ 00:22:06.734 { 00:22:06.734 "dma_device_id": "system", 00:22:06.734 "dma_device_type": 1 00:22:06.734 }, 00:22:06.734 { 00:22:06.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.734 "dma_device_type": 2 00:22:06.734 } 00:22:06.734 ], 00:22:06.734 "driver_specific": {} 00:22:06.734 } 00:22:06.734 ] 00:22:06.734 13:42:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:06.734 "name": "Existed_Raid", 00:22:06.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.734 "strip_size_kb": 64, 00:22:06.734 "state": "configuring", 00:22:06.734 "raid_level": "raid0", 00:22:06.734 "superblock": false, 00:22:06.734 "num_base_bdevs": 3, 00:22:06.734 "num_base_bdevs_discovered": 2, 00:22:06.734 "num_base_bdevs_operational": 3, 00:22:06.734 "base_bdevs_list": [ 00:22:06.734 { 00:22:06.734 "name": "BaseBdev1", 00:22:06.734 "uuid": "7e49f150-9603-4c56-af54-7d7fa379b7ff", 00:22:06.734 "is_configured": true, 00:22:06.734 "data_offset": 0, 00:22:06.734 "data_size": 65536 00:22:06.734 }, 00:22:06.734 { 00:22:06.734 "name": "BaseBdev2", 00:22:06.734 "uuid": "5bb7f453-b7b7-4ac2-b321-09f3b62d5304", 00:22:06.734 "is_configured": true, 00:22:06.734 "data_offset": 0, 00:22:06.734 "data_size": 65536 00:22:06.734 }, 00:22:06.734 { 00:22:06.734 "name": "BaseBdev3", 00:22:06.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.734 "is_configured": false, 00:22:06.734 "data_offset": 0, 00:22:06.734 "data_size": 0 00:22:06.734 } 00:22:06.734 ] 00:22:06.734 }' 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:06.734 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.302 13:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:07.302 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.302 13:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.302 [2024-11-20 13:42:10.042553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:22:07.302 [2024-11-20 13:42:10.042628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:07.302 [2024-11-20 13:42:10.042648] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:22:07.302 [2024-11-20 13:42:10.043063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:07.302 [2024-11-20 13:42:10.043312] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:07.302 [2024-11-20 13:42:10.043339] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:07.302 [2024-11-20 13:42:10.043667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:07.302 BaseBdev3 00:22:07.302 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.302 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:07.302 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:07.302 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:07.302 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:07.302 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:07.302 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:07.302 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:07.302 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.302 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.302 13:42:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.302 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:07.302 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.302 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.302 [ 00:22:07.302 { 00:22:07.302 "name": "BaseBdev3", 00:22:07.302 "aliases": [ 00:22:07.302 "f8cf5804-92ed-41c6-ae5f-9024055f4c06" 00:22:07.302 ], 00:22:07.302 "product_name": "Malloc disk", 00:22:07.302 "block_size": 512, 00:22:07.302 "num_blocks": 65536, 00:22:07.302 "uuid": "f8cf5804-92ed-41c6-ae5f-9024055f4c06", 00:22:07.302 "assigned_rate_limits": { 00:22:07.302 "rw_ios_per_sec": 0, 00:22:07.302 "rw_mbytes_per_sec": 0, 00:22:07.302 "r_mbytes_per_sec": 0, 00:22:07.302 "w_mbytes_per_sec": 0 00:22:07.302 }, 00:22:07.302 "claimed": true, 00:22:07.302 "claim_type": "exclusive_write", 00:22:07.302 "zoned": false, 00:22:07.302 "supported_io_types": { 00:22:07.302 "read": true, 00:22:07.303 "write": true, 00:22:07.303 "unmap": true, 00:22:07.303 "flush": true, 00:22:07.303 "reset": true, 00:22:07.303 "nvme_admin": false, 00:22:07.303 "nvme_io": false, 00:22:07.303 "nvme_io_md": false, 00:22:07.303 "write_zeroes": true, 00:22:07.303 "zcopy": true, 00:22:07.303 "get_zone_info": false, 00:22:07.303 "zone_management": false, 00:22:07.303 "zone_append": false, 00:22:07.303 "compare": false, 00:22:07.303 "compare_and_write": false, 00:22:07.303 "abort": true, 00:22:07.303 "seek_hole": false, 00:22:07.303 "seek_data": false, 00:22:07.303 "copy": true, 00:22:07.303 "nvme_iov_md": false 00:22:07.303 }, 00:22:07.303 "memory_domains": [ 00:22:07.303 { 00:22:07.303 "dma_device_id": "system", 00:22:07.303 "dma_device_type": 1 00:22:07.303 }, 00:22:07.303 { 00:22:07.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.303 "dma_device_type": 
2 00:22:07.303 } 00:22:07.303 ], 00:22:07.303 "driver_specific": {} 00:22:07.303 } 00:22:07.303 ] 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.303 "name": "Existed_Raid", 00:22:07.303 "uuid": "658acb16-863b-4130-8e2f-9ec714b0936c", 00:22:07.303 "strip_size_kb": 64, 00:22:07.303 "state": "online", 00:22:07.303 "raid_level": "raid0", 00:22:07.303 "superblock": false, 00:22:07.303 "num_base_bdevs": 3, 00:22:07.303 "num_base_bdevs_discovered": 3, 00:22:07.303 "num_base_bdevs_operational": 3, 00:22:07.303 "base_bdevs_list": [ 00:22:07.303 { 00:22:07.303 "name": "BaseBdev1", 00:22:07.303 "uuid": "7e49f150-9603-4c56-af54-7d7fa379b7ff", 00:22:07.303 "is_configured": true, 00:22:07.303 "data_offset": 0, 00:22:07.303 "data_size": 65536 00:22:07.303 }, 00:22:07.303 { 00:22:07.303 "name": "BaseBdev2", 00:22:07.303 "uuid": "5bb7f453-b7b7-4ac2-b321-09f3b62d5304", 00:22:07.303 "is_configured": true, 00:22:07.303 "data_offset": 0, 00:22:07.303 "data_size": 65536 00:22:07.303 }, 00:22:07.303 { 00:22:07.303 "name": "BaseBdev3", 00:22:07.303 "uuid": "f8cf5804-92ed-41c6-ae5f-9024055f4c06", 00:22:07.303 "is_configured": true, 00:22:07.303 "data_offset": 0, 00:22:07.303 "data_size": 65536 00:22:07.303 } 00:22:07.303 ] 00:22:07.303 }' 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.303 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.872 [2024-11-20 13:42:10.611370] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:07.872 "name": "Existed_Raid", 00:22:07.872 "aliases": [ 00:22:07.872 "658acb16-863b-4130-8e2f-9ec714b0936c" 00:22:07.872 ], 00:22:07.872 "product_name": "Raid Volume", 00:22:07.872 "block_size": 512, 00:22:07.872 "num_blocks": 196608, 00:22:07.872 "uuid": "658acb16-863b-4130-8e2f-9ec714b0936c", 00:22:07.872 "assigned_rate_limits": { 00:22:07.872 "rw_ios_per_sec": 0, 00:22:07.872 "rw_mbytes_per_sec": 0, 00:22:07.872 "r_mbytes_per_sec": 0, 00:22:07.872 "w_mbytes_per_sec": 0 00:22:07.872 }, 00:22:07.872 "claimed": false, 00:22:07.872 "zoned": false, 00:22:07.872 "supported_io_types": { 00:22:07.872 "read": true, 00:22:07.872 "write": true, 00:22:07.872 "unmap": true, 00:22:07.872 "flush": true, 00:22:07.872 "reset": true, 00:22:07.872 "nvme_admin": false, 00:22:07.872 "nvme_io": false, 00:22:07.872 "nvme_io_md": false, 00:22:07.872 "write_zeroes": true, 00:22:07.872 
"zcopy": false, 00:22:07.872 "get_zone_info": false, 00:22:07.872 "zone_management": false, 00:22:07.872 "zone_append": false, 00:22:07.872 "compare": false, 00:22:07.872 "compare_and_write": false, 00:22:07.872 "abort": false, 00:22:07.872 "seek_hole": false, 00:22:07.872 "seek_data": false, 00:22:07.872 "copy": false, 00:22:07.872 "nvme_iov_md": false 00:22:07.872 }, 00:22:07.872 "memory_domains": [ 00:22:07.872 { 00:22:07.872 "dma_device_id": "system", 00:22:07.872 "dma_device_type": 1 00:22:07.872 }, 00:22:07.872 { 00:22:07.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.872 "dma_device_type": 2 00:22:07.872 }, 00:22:07.872 { 00:22:07.872 "dma_device_id": "system", 00:22:07.872 "dma_device_type": 1 00:22:07.872 }, 00:22:07.872 { 00:22:07.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.872 "dma_device_type": 2 00:22:07.872 }, 00:22:07.872 { 00:22:07.872 "dma_device_id": "system", 00:22:07.872 "dma_device_type": 1 00:22:07.872 }, 00:22:07.872 { 00:22:07.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.872 "dma_device_type": 2 00:22:07.872 } 00:22:07.872 ], 00:22:07.872 "driver_specific": { 00:22:07.872 "raid": { 00:22:07.872 "uuid": "658acb16-863b-4130-8e2f-9ec714b0936c", 00:22:07.872 "strip_size_kb": 64, 00:22:07.872 "state": "online", 00:22:07.872 "raid_level": "raid0", 00:22:07.872 "superblock": false, 00:22:07.872 "num_base_bdevs": 3, 00:22:07.872 "num_base_bdevs_discovered": 3, 00:22:07.872 "num_base_bdevs_operational": 3, 00:22:07.872 "base_bdevs_list": [ 00:22:07.872 { 00:22:07.872 "name": "BaseBdev1", 00:22:07.872 "uuid": "7e49f150-9603-4c56-af54-7d7fa379b7ff", 00:22:07.872 "is_configured": true, 00:22:07.872 "data_offset": 0, 00:22:07.872 "data_size": 65536 00:22:07.872 }, 00:22:07.872 { 00:22:07.872 "name": "BaseBdev2", 00:22:07.872 "uuid": "5bb7f453-b7b7-4ac2-b321-09f3b62d5304", 00:22:07.872 "is_configured": true, 00:22:07.872 "data_offset": 0, 00:22:07.872 "data_size": 65536 00:22:07.872 }, 00:22:07.872 { 00:22:07.872 "name": 
"BaseBdev3", 00:22:07.872 "uuid": "f8cf5804-92ed-41c6-ae5f-9024055f4c06", 00:22:07.872 "is_configured": true, 00:22:07.872 "data_offset": 0, 00:22:07.872 "data_size": 65536 00:22:07.872 } 00:22:07.872 ] 00:22:07.872 } 00:22:07.872 } 00:22:07.872 }' 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:07.872 BaseBdev2 00:22:07.872 BaseBdev3' 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:07.872 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.131 13:42:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:08.131 [2024-11-20 13:42:10.931097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:08.131 [2024-11-20 13:42:10.931138] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:08.131 [2024-11-20 13:42:10.931214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:08.131 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.131 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:08.131 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:22:08.131 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:08.131 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:08.131 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:22:08.131 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:22:08.131 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:08.131 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:22:08.131 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:08.131 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:08.131 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:08.131 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.131 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.131 13:42:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.131 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.132 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.132 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.132 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.132 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.132 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.390 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.390 "name": "Existed_Raid", 00:22:08.390 "uuid": "658acb16-863b-4130-8e2f-9ec714b0936c", 00:22:08.390 "strip_size_kb": 64, 00:22:08.390 "state": "offline", 00:22:08.390 "raid_level": "raid0", 00:22:08.390 "superblock": false, 00:22:08.390 "num_base_bdevs": 3, 00:22:08.390 "num_base_bdevs_discovered": 2, 00:22:08.390 "num_base_bdevs_operational": 2, 00:22:08.390 "base_bdevs_list": [ 00:22:08.390 { 00:22:08.390 "name": null, 00:22:08.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.390 "is_configured": false, 00:22:08.390 "data_offset": 0, 00:22:08.390 "data_size": 65536 00:22:08.390 }, 00:22:08.390 { 00:22:08.390 "name": "BaseBdev2", 00:22:08.390 "uuid": "5bb7f453-b7b7-4ac2-b321-09f3b62d5304", 00:22:08.390 "is_configured": true, 00:22:08.390 "data_offset": 0, 00:22:08.390 "data_size": 65536 00:22:08.390 }, 00:22:08.390 { 00:22:08.390 "name": "BaseBdev3", 00:22:08.390 "uuid": "f8cf5804-92ed-41c6-ae5f-9024055f4c06", 00:22:08.390 "is_configured": true, 00:22:08.390 "data_offset": 0, 00:22:08.390 "data_size": 65536 00:22:08.390 } 00:22:08.390 ] 00:22:08.390 }' 00:22:08.390 13:42:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.390 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.659 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:08.659 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:08.659 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.659 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.659 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.659 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:08.659 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.917 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:08.917 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:08.917 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:08.917 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.917 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.917 [2024-11-20 13:42:11.603085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:08.917 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.917 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:08.917 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:08.917 13:42:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.918 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.918 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:08.918 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.918 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.918 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:08.918 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:08.918 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:08.918 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.918 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.918 [2024-11-20 13:42:11.756378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:08.918 [2024-11-20 13:42:11.756453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.177 13:42:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:09.177 BaseBdev2
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.177 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:09.177 [
00:22:09.177 {
00:22:09.177 "name": "BaseBdev2",
00:22:09.177 "aliases": [
00:22:09.177 "b98df895-7170-4aff-8bcd-abb65fdf2261"
00:22:09.177 ],
00:22:09.177 "product_name": "Malloc disk",
00:22:09.177 "block_size": 512,
00:22:09.177 "num_blocks": 65536,
00:22:09.177 "uuid": "b98df895-7170-4aff-8bcd-abb65fdf2261",
00:22:09.177 "assigned_rate_limits": {
00:22:09.177 "rw_ios_per_sec": 0,
00:22:09.177 "rw_mbytes_per_sec": 0,
00:22:09.177 "r_mbytes_per_sec": 0,
00:22:09.177 "w_mbytes_per_sec": 0
00:22:09.177 },
00:22:09.177 "claimed": false,
00:22:09.177 "zoned": false,
00:22:09.178 "supported_io_types": {
00:22:09.178 "read": true,
00:22:09.178 "write": true,
00:22:09.178 "unmap": true,
00:22:09.178 "flush": true,
00:22:09.178 "reset": true,
00:22:09.178 "nvme_admin": false,
00:22:09.178 "nvme_io": false,
00:22:09.178 "nvme_io_md": false,
00:22:09.178 "write_zeroes": true,
00:22:09.178 "zcopy": true,
00:22:09.178 "get_zone_info": false,
00:22:09.178 "zone_management": false,
00:22:09.178 "zone_append": false,
00:22:09.178 "compare": false,
00:22:09.178 "compare_and_write": false,
00:22:09.178 "abort": true,
00:22:09.178 "seek_hole": false,
00:22:09.178 "seek_data": false,
00:22:09.178 "copy": true,
00:22:09.178 "nvme_iov_md": false
00:22:09.178 },
00:22:09.178 "memory_domains": [
00:22:09.178 {
00:22:09.178 "dma_device_id": "system",
00:22:09.178 "dma_device_type": 1
00:22:09.178 },
00:22:09.178 {
00:22:09.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:09.178 "dma_device_type": 2
00:22:09.178 }
00:22:09.178 ],
00:22:09.178 "driver_specific": {}
00:22:09.178 }
00:22:09.178 ]
00:22:09.178 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.178 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:22:09.178 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:22:09.178 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:22:09.178 13:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:22:09.178 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.178 13:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:09.178 BaseBdev3
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:09.178 [
00:22:09.178 {
00:22:09.178 "name": "BaseBdev3",
00:22:09.178 "aliases": [
00:22:09.178 "013f20dc-6311-4cb3-ac07-8f54b8aa0741"
00:22:09.178 ],
00:22:09.178 "product_name": "Malloc disk",
00:22:09.178 "block_size": 512,
00:22:09.178 "num_blocks": 65536,
00:22:09.178 "uuid": "013f20dc-6311-4cb3-ac07-8f54b8aa0741",
00:22:09.178 "assigned_rate_limits": {
00:22:09.178 "rw_ios_per_sec": 0,
00:22:09.178 "rw_mbytes_per_sec": 0,
00:22:09.178 "r_mbytes_per_sec": 0,
00:22:09.178 "w_mbytes_per_sec": 0
00:22:09.178 },
00:22:09.178 "claimed": false,
00:22:09.178 "zoned": false,
00:22:09.178 "supported_io_types": {
00:22:09.178 "read": true,
00:22:09.178 "write": true,
00:22:09.178 "unmap": true,
00:22:09.178 "flush": true,
00:22:09.178 "reset": true,
00:22:09.178 "nvme_admin": false,
00:22:09.178 "nvme_io": false,
00:22:09.178 "nvme_io_md": false,
00:22:09.178 "write_zeroes": true,
00:22:09.178 "zcopy": true,
00:22:09.178 "get_zone_info": false,
00:22:09.178 "zone_management": false,
00:22:09.178 "zone_append": false,
00:22:09.178 "compare": false,
00:22:09.178 "compare_and_write": false,
00:22:09.178 "abort": true,
00:22:09.178 "seek_hole": false,
00:22:09.178 "seek_data": false,
00:22:09.178 "copy": true,
00:22:09.178 "nvme_iov_md": false
00:22:09.178 },
00:22:09.178 "memory_domains": [
00:22:09.178 {
00:22:09.178 "dma_device_id": "system",
00:22:09.178 "dma_device_type": 1
00:22:09.178 },
00:22:09.178 {
00:22:09.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:09.178 "dma_device_type": 2
00:22:09.178 }
00:22:09.178 ],
00:22:09.178 "driver_specific": {}
00:22:09.178 }
00:22:09.178 ]
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:09.178 [2024-11-20 13:42:12.067682] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-11-20 13:42:12.067739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-11-20 13:42:12.067772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-11-20 13:42:12.070216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.178 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:09.438 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.438 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:09.438 "name": "Existed_Raid",
00:22:09.438 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:09.438 "strip_size_kb": 64,
00:22:09.438 "state": "configuring",
00:22:09.438 "raid_level": "raid0",
00:22:09.438 "superblock": false,
00:22:09.438 "num_base_bdevs": 3,
00:22:09.438 "num_base_bdevs_discovered": 2,
00:22:09.438 "num_base_bdevs_operational": 3,
00:22:09.438 "base_bdevs_list": [
00:22:09.438 {
00:22:09.438 "name": "BaseBdev1",
00:22:09.438 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:09.438 "is_configured": false,
00:22:09.438 "data_offset": 0,
00:22:09.438 "data_size": 0
00:22:09.438 },
00:22:09.438 {
00:22:09.438 "name": "BaseBdev2",
00:22:09.438 "uuid": "b98df895-7170-4aff-8bcd-abb65fdf2261",
00:22:09.438 "is_configured": true,
00:22:09.438 "data_offset": 0,
00:22:09.438 "data_size": 65536
00:22:09.438 },
00:22:09.438 {
00:22:09.438 "name": "BaseBdev3",
00:22:09.438 "uuid": "013f20dc-6311-4cb3-ac07-8f54b8aa0741",
00:22:09.438 "is_configured": true,
00:22:09.438 "data_offset": 0,
00:22:09.438 "data_size": 65536
00:22:09.438 }
00:22:09.438 ]
00:22:09.438 }'
00:22:09.438 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:09.438 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:10.006 [2024-11-20 13:42:12.635993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.006 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:10.006 "name": "Existed_Raid",
00:22:10.006 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:10.006 "strip_size_kb": 64,
00:22:10.006 "state": "configuring",
00:22:10.006 "raid_level": "raid0",
00:22:10.006 "superblock": false,
00:22:10.006 "num_base_bdevs": 3,
00:22:10.006 "num_base_bdevs_discovered": 1,
00:22:10.006 "num_base_bdevs_operational": 3,
00:22:10.006 "base_bdevs_list": [
00:22:10.007 {
00:22:10.007 "name": "BaseBdev1",
00:22:10.007 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:10.007 "is_configured": false,
00:22:10.007 "data_offset": 0,
00:22:10.007 "data_size": 0
00:22:10.007 },
00:22:10.007 {
00:22:10.007 "name": null,
00:22:10.007 "uuid": "b98df895-7170-4aff-8bcd-abb65fdf2261",
00:22:10.007 "is_configured": false,
00:22:10.007 "data_offset": 0,
00:22:10.007 "data_size": 65536
00:22:10.007 },
00:22:10.007 {
00:22:10.007 "name": "BaseBdev3",
00:22:10.007 "uuid": "013f20dc-6311-4cb3-ac07-8f54b8aa0741",
00:22:10.007 "is_configured": true,
00:22:10.007 "data_offset": 0,
00:22:10.007 "data_size": 65536
00:22:10.007 }
00:22:10.007 ]
00:22:10.007 }'
00:22:10.007 13:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:10.007 13:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:10.265 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:10.265 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:22:10.265 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.265 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:10.524 [2024-11-20 13:42:13.259386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:22:10.524 BaseBdev1
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:10.524 [
00:22:10.524 {
00:22:10.524 "name": "BaseBdev1",
00:22:10.524 "aliases": [
00:22:10.524 "6b195cad-06db-4037-af5b-053786882535"
00:22:10.524 ],
00:22:10.524 "product_name": "Malloc disk",
00:22:10.524 "block_size": 512,
00:22:10.524 "num_blocks": 65536,
00:22:10.524 "uuid": "6b195cad-06db-4037-af5b-053786882535",
00:22:10.524 "assigned_rate_limits": {
00:22:10.524 "rw_ios_per_sec": 0,
00:22:10.524 "rw_mbytes_per_sec": 0,
00:22:10.524 "r_mbytes_per_sec": 0,
00:22:10.524 "w_mbytes_per_sec": 0
00:22:10.524 },
00:22:10.524 "claimed": true,
00:22:10.524 "claim_type": "exclusive_write",
00:22:10.524 "zoned": false,
00:22:10.524 "supported_io_types": {
00:22:10.524 "read": true,
00:22:10.524 "write": true,
00:22:10.524 "unmap": true,
00:22:10.524 "flush": true,
00:22:10.524 "reset": true,
00:22:10.524 "nvme_admin": false,
00:22:10.524 "nvme_io": false,
00:22:10.524 "nvme_io_md": false,
00:22:10.524 "write_zeroes": true,
00:22:10.524 "zcopy": true,
00:22:10.524 "get_zone_info": false,
00:22:10.524 "zone_management": false,
00:22:10.524 "zone_append": false,
00:22:10.524 "compare": false,
00:22:10.524 "compare_and_write": false,
00:22:10.524 "abort": true,
00:22:10.524 "seek_hole": false,
00:22:10.524 "seek_data": false,
00:22:10.524 "copy": true,
00:22:10.524 "nvme_iov_md": false
00:22:10.524 },
00:22:10.524 "memory_domains": [
00:22:10.524 {
00:22:10.524 "dma_device_id": "system",
00:22:10.524 "dma_device_type": 1
00:22:10.524 },
00:22:10.524 {
00:22:10.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:10.524 "dma_device_type": 2
00:22:10.524 }
00:22:10.524 ],
00:22:10.524 "driver_specific": {}
00:22:10.524 }
00:22:10.524 ]
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.524 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:10.525 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.525 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:10.525 "name": "Existed_Raid",
00:22:10.525 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:10.525 "strip_size_kb": 64,
00:22:10.525 "state": "configuring",
00:22:10.525 "raid_level": "raid0",
00:22:10.525 "superblock": false,
00:22:10.525 "num_base_bdevs": 3,
00:22:10.525 "num_base_bdevs_discovered": 2,
00:22:10.525 "num_base_bdevs_operational": 3,
00:22:10.525 "base_bdevs_list": [
00:22:10.525 {
00:22:10.525 "name": "BaseBdev1",
00:22:10.525 "uuid": "6b195cad-06db-4037-af5b-053786882535",
00:22:10.525 "is_configured": true,
00:22:10.525 "data_offset": 0,
00:22:10.525 "data_size": 65536
00:22:10.525 },
00:22:10.525 {
00:22:10.525 "name": null,
00:22:10.525 "uuid": "b98df895-7170-4aff-8bcd-abb65fdf2261",
00:22:10.525 "is_configured": false,
00:22:10.525 "data_offset": 0,
00:22:10.525 "data_size": 65536
00:22:10.525 },
00:22:10.525 {
00:22:10.525 "name": "BaseBdev3",
00:22:10.525 "uuid": "013f20dc-6311-4cb3-ac07-8f54b8aa0741",
00:22:10.525 "is_configured": true,
00:22:10.525 "data_offset": 0,
00:22:10.525 "data_size": 65536
00:22:10.525 }
00:22:10.525 ]
00:22:10.525 }'
00:22:10.525 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:10.525 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:11.091 [2024-11-20 13:42:13.875625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:11.091 "name": "Existed_Raid",
00:22:11.091 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:11.091 "strip_size_kb": 64,
00:22:11.091 "state": "configuring",
00:22:11.091 "raid_level": "raid0",
00:22:11.091 "superblock": false,
00:22:11.091 "num_base_bdevs": 3,
00:22:11.091 "num_base_bdevs_discovered": 1,
00:22:11.091 "num_base_bdevs_operational": 3,
00:22:11.091 "base_bdevs_list": [
00:22:11.091 {
00:22:11.091 "name": "BaseBdev1",
00:22:11.091 "uuid": "6b195cad-06db-4037-af5b-053786882535",
00:22:11.091 "is_configured": true,
00:22:11.091 "data_offset": 0,
00:22:11.091 "data_size": 65536
00:22:11.091 },
00:22:11.091 {
00:22:11.091 "name": null,
00:22:11.091 "uuid": "b98df895-7170-4aff-8bcd-abb65fdf2261",
00:22:11.091 "is_configured": false,
00:22:11.091 "data_offset": 0,
00:22:11.091 "data_size": 65536
00:22:11.091 },
00:22:11.091 {
00:22:11.091 "name": null,
00:22:11.091 "uuid": "013f20dc-6311-4cb3-ac07-8f54b8aa0741",
00:22:11.091 "is_configured": false,
00:22:11.091 "data_offset": 0,
00:22:11.091 "data_size": 65536
00:22:11.091 }
00:22:11.091 ]
00:22:11.091 }'
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:11.091 13:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:11.657 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:22:11.657 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:11.657 13:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.657 13:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:11.657 13:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.657 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:11.658 [2024-11-20 13:42:14.431842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:11.658 "name": "Existed_Raid",
00:22:11.658 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:11.658 "strip_size_kb": 64,
00:22:11.658 "state": "configuring",
00:22:11.658 "raid_level": "raid0",
00:22:11.658 "superblock": false,
00:22:11.658 "num_base_bdevs": 3,
00:22:11.658 "num_base_bdevs_discovered": 2,
00:22:11.658 "num_base_bdevs_operational": 3,
00:22:11.658 "base_bdevs_list": [
00:22:11.658 {
00:22:11.658 "name": "BaseBdev1",
00:22:11.658 "uuid": "6b195cad-06db-4037-af5b-053786882535",
00:22:11.658 "is_configured": true,
00:22:11.658 "data_offset": 0,
00:22:11.658 "data_size": 65536
00:22:11.658 },
00:22:11.658 {
00:22:11.658 "name": null,
00:22:11.658 "uuid": "b98df895-7170-4aff-8bcd-abb65fdf2261",
00:22:11.658 "is_configured": false,
00:22:11.658 "data_offset": 0,
00:22:11.658 "data_size": 65536
00:22:11.658 },
00:22:11.658 {
00:22:11.658 "name": "BaseBdev3",
00:22:11.658 "uuid": "013f20dc-6311-4cb3-ac07-8f54b8aa0741",
00:22:11.658 "is_configured": true,
00:22:11.658 "data_offset": 0,
00:22:11.658 "data_size": 65536
00:22:11.658 }
00:22:11.658 ]
00:22:11.658 }'
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:11.658 13:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:12.265 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:12.265 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:22:12.265 13:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:12.265 13:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:12.265 13:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:12.265 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:22:12.265 13:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:22:12.265 13:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:12.265 13:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:12.265 [2024-11-20 13:42:15.004043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:22:12.265 13:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:12.265 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:22:12.265 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:12.265 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:12.265 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:22:12.265 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:12.265 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:12.265 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:12.265 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:12.265 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:12.265 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:12.265 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:12.265 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:12.265 13:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:12.265 13:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:12.265 13:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:12.265 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:12.265 "name": "Existed_Raid",
00:22:12.265 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:12.265 "strip_size_kb": 64,
00:22:12.265 "state": "configuring",
00:22:12.265 "raid_level": "raid0",
00:22:12.265 "superblock": false,
00:22:12.265 "num_base_bdevs": 3,
00:22:12.266 "num_base_bdevs_discovered": 1,
00:22:12.266 "num_base_bdevs_operational": 3,
00:22:12.266 "base_bdevs_list": [
00:22:12.266 {
00:22:12.266 "name": null,
00:22:12.266 "uuid": "6b195cad-06db-4037-af5b-053786882535",
00:22:12.266 "is_configured": false,
00:22:12.266 "data_offset": 0,
00:22:12.266 "data_size": 65536
00:22:12.266 },
00:22:12.266 {
00:22:12.266 "name": null,
00:22:12.266 "uuid": "b98df895-7170-4aff-8bcd-abb65fdf2261",
00:22:12.266 "is_configured": false,
00:22:12.266 "data_offset": 0,
00:22:12.266 "data_size": 65536
00:22:12.266 },
00:22:12.266 {
00:22:12.266 "name": "BaseBdev3",
00:22:12.266 "uuid": "013f20dc-6311-4cb3-ac07-8f54b8aa0741",
00:22:12.266 "is_configured": true,
00:22:12.266 "data_offset": 0,
00:22:12.266 "data_size": 65536
00:22:12.266 }
00:22:12.266 ]
00:22:12.266 }'
00:22:12.266 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:12.266 13:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:12.834 [2024-11-20 13:42:15.659270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:12.834 13:42:15 bdev_raid.raid_state_function_test
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.834 "name": "Existed_Raid", 00:22:12.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.834 "strip_size_kb": 64, 00:22:12.834 "state": "configuring", 00:22:12.834 "raid_level": "raid0", 00:22:12.834 "superblock": false, 00:22:12.834 "num_base_bdevs": 3, 00:22:12.834 "num_base_bdevs_discovered": 2, 00:22:12.834 "num_base_bdevs_operational": 3, 00:22:12.834 "base_bdevs_list": [ 00:22:12.834 { 00:22:12.834 "name": null, 00:22:12.834 "uuid": "6b195cad-06db-4037-af5b-053786882535", 00:22:12.834 "is_configured": false, 00:22:12.834 "data_offset": 0, 00:22:12.834 "data_size": 65536 00:22:12.834 }, 00:22:12.834 { 00:22:12.834 "name": "BaseBdev2", 00:22:12.834 "uuid": "b98df895-7170-4aff-8bcd-abb65fdf2261", 00:22:12.834 "is_configured": true, 00:22:12.834 "data_offset": 0, 00:22:12.834 "data_size": 65536 00:22:12.834 }, 00:22:12.834 { 00:22:12.834 "name": "BaseBdev3", 00:22:12.834 "uuid": "013f20dc-6311-4cb3-ac07-8f54b8aa0741", 00:22:12.834 "is_configured": true, 00:22:12.834 "data_offset": 0, 00:22:12.834 "data_size": 65536 00:22:12.834 } 00:22:12.834 ] 00:22:12.834 }' 00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.834 13:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.402 13:42:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.402 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:13.402 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.402 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.402 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.402 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:13.402 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.402 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.402 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:13.402 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.402 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6b195cad-06db-4037-af5b-053786882535 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.661 [2024-11-20 13:42:16.357559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:13.661 [2024-11-20 13:42:16.357837] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:13.661 [2024-11-20 13:42:16.357868] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:22:13.661 [2024-11-20 13:42:16.358217] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:13.661 [2024-11-20 13:42:16.358421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:13.661 [2024-11-20 13:42:16.358439] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:13.661 [2024-11-20 13:42:16.358749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.661 NewBaseBdev 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.661 
13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.661 [ 00:22:13.661 { 00:22:13.661 "name": "NewBaseBdev", 00:22:13.661 "aliases": [ 00:22:13.661 "6b195cad-06db-4037-af5b-053786882535" 00:22:13.661 ], 00:22:13.661 "product_name": "Malloc disk", 00:22:13.661 "block_size": 512, 00:22:13.661 "num_blocks": 65536, 00:22:13.661 "uuid": "6b195cad-06db-4037-af5b-053786882535", 00:22:13.661 "assigned_rate_limits": { 00:22:13.661 "rw_ios_per_sec": 0, 00:22:13.661 "rw_mbytes_per_sec": 0, 00:22:13.661 "r_mbytes_per_sec": 0, 00:22:13.661 "w_mbytes_per_sec": 0 00:22:13.661 }, 00:22:13.661 "claimed": true, 00:22:13.661 "claim_type": "exclusive_write", 00:22:13.661 "zoned": false, 00:22:13.661 "supported_io_types": { 00:22:13.661 "read": true, 00:22:13.661 "write": true, 00:22:13.661 "unmap": true, 00:22:13.661 "flush": true, 00:22:13.661 "reset": true, 00:22:13.661 "nvme_admin": false, 00:22:13.661 "nvme_io": false, 00:22:13.661 "nvme_io_md": false, 00:22:13.661 "write_zeroes": true, 00:22:13.661 "zcopy": true, 00:22:13.661 "get_zone_info": false, 00:22:13.661 "zone_management": false, 00:22:13.661 "zone_append": false, 00:22:13.661 "compare": false, 00:22:13.661 "compare_and_write": false, 00:22:13.661 "abort": true, 00:22:13.661 "seek_hole": false, 00:22:13.661 "seek_data": false, 00:22:13.661 "copy": true, 00:22:13.661 "nvme_iov_md": false 00:22:13.661 }, 00:22:13.661 "memory_domains": [ 00:22:13.661 { 00:22:13.661 "dma_device_id": "system", 00:22:13.661 "dma_device_type": 1 00:22:13.661 }, 00:22:13.661 { 00:22:13.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.661 "dma_device_type": 2 00:22:13.661 } 00:22:13.661 ], 00:22:13.661 "driver_specific": {} 00:22:13.661 } 00:22:13.661 ] 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:13.661 13:42:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.661 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.661 "name": "Existed_Raid", 00:22:13.661 "uuid": "ce220e62-7216-4797-a8a8-4d7cfde2af14", 00:22:13.661 "strip_size_kb": 64, 00:22:13.661 "state": "online", 00:22:13.661 "raid_level": 
"raid0", 00:22:13.661 "superblock": false, 00:22:13.661 "num_base_bdevs": 3, 00:22:13.661 "num_base_bdevs_discovered": 3, 00:22:13.661 "num_base_bdevs_operational": 3, 00:22:13.661 "base_bdevs_list": [ 00:22:13.661 { 00:22:13.661 "name": "NewBaseBdev", 00:22:13.661 "uuid": "6b195cad-06db-4037-af5b-053786882535", 00:22:13.661 "is_configured": true, 00:22:13.661 "data_offset": 0, 00:22:13.662 "data_size": 65536 00:22:13.662 }, 00:22:13.662 { 00:22:13.662 "name": "BaseBdev2", 00:22:13.662 "uuid": "b98df895-7170-4aff-8bcd-abb65fdf2261", 00:22:13.662 "is_configured": true, 00:22:13.662 "data_offset": 0, 00:22:13.662 "data_size": 65536 00:22:13.662 }, 00:22:13.662 { 00:22:13.662 "name": "BaseBdev3", 00:22:13.662 "uuid": "013f20dc-6311-4cb3-ac07-8f54b8aa0741", 00:22:13.662 "is_configured": true, 00:22:13.662 "data_offset": 0, 00:22:13.662 "data_size": 65536 00:22:13.662 } 00:22:13.662 ] 00:22:13.662 }' 00:22:13.662 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.662 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.229 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:14.229 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:14.229 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:14.229 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:14.229 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:14.229 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:14.229 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:14.229 13:42:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:14.229 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.229 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.229 [2024-11-20 13:42:16.898161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:14.229 13:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.229 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:14.229 "name": "Existed_Raid", 00:22:14.229 "aliases": [ 00:22:14.229 "ce220e62-7216-4797-a8a8-4d7cfde2af14" 00:22:14.229 ], 00:22:14.229 "product_name": "Raid Volume", 00:22:14.229 "block_size": 512, 00:22:14.229 "num_blocks": 196608, 00:22:14.229 "uuid": "ce220e62-7216-4797-a8a8-4d7cfde2af14", 00:22:14.229 "assigned_rate_limits": { 00:22:14.229 "rw_ios_per_sec": 0, 00:22:14.229 "rw_mbytes_per_sec": 0, 00:22:14.229 "r_mbytes_per_sec": 0, 00:22:14.229 "w_mbytes_per_sec": 0 00:22:14.229 }, 00:22:14.229 "claimed": false, 00:22:14.229 "zoned": false, 00:22:14.229 "supported_io_types": { 00:22:14.229 "read": true, 00:22:14.229 "write": true, 00:22:14.229 "unmap": true, 00:22:14.229 "flush": true, 00:22:14.229 "reset": true, 00:22:14.229 "nvme_admin": false, 00:22:14.229 "nvme_io": false, 00:22:14.229 "nvme_io_md": false, 00:22:14.229 "write_zeroes": true, 00:22:14.229 "zcopy": false, 00:22:14.229 "get_zone_info": false, 00:22:14.229 "zone_management": false, 00:22:14.229 "zone_append": false, 00:22:14.229 "compare": false, 00:22:14.229 "compare_and_write": false, 00:22:14.229 "abort": false, 00:22:14.229 "seek_hole": false, 00:22:14.229 "seek_data": false, 00:22:14.229 "copy": false, 00:22:14.229 "nvme_iov_md": false 00:22:14.229 }, 00:22:14.229 "memory_domains": [ 00:22:14.229 { 00:22:14.229 "dma_device_id": "system", 00:22:14.229 "dma_device_type": 1 00:22:14.229 }, 00:22:14.229 { 
00:22:14.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.229 "dma_device_type": 2 00:22:14.229 }, 00:22:14.229 { 00:22:14.229 "dma_device_id": "system", 00:22:14.229 "dma_device_type": 1 00:22:14.229 }, 00:22:14.229 { 00:22:14.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.229 "dma_device_type": 2 00:22:14.229 }, 00:22:14.229 { 00:22:14.229 "dma_device_id": "system", 00:22:14.229 "dma_device_type": 1 00:22:14.229 }, 00:22:14.229 { 00:22:14.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.229 "dma_device_type": 2 00:22:14.229 } 00:22:14.229 ], 00:22:14.229 "driver_specific": { 00:22:14.229 "raid": { 00:22:14.229 "uuid": "ce220e62-7216-4797-a8a8-4d7cfde2af14", 00:22:14.229 "strip_size_kb": 64, 00:22:14.229 "state": "online", 00:22:14.229 "raid_level": "raid0", 00:22:14.229 "superblock": false, 00:22:14.229 "num_base_bdevs": 3, 00:22:14.229 "num_base_bdevs_discovered": 3, 00:22:14.229 "num_base_bdevs_operational": 3, 00:22:14.229 "base_bdevs_list": [ 00:22:14.229 { 00:22:14.229 "name": "NewBaseBdev", 00:22:14.229 "uuid": "6b195cad-06db-4037-af5b-053786882535", 00:22:14.229 "is_configured": true, 00:22:14.229 "data_offset": 0, 00:22:14.229 "data_size": 65536 00:22:14.229 }, 00:22:14.229 { 00:22:14.229 "name": "BaseBdev2", 00:22:14.229 "uuid": "b98df895-7170-4aff-8bcd-abb65fdf2261", 00:22:14.229 "is_configured": true, 00:22:14.229 "data_offset": 0, 00:22:14.229 "data_size": 65536 00:22:14.229 }, 00:22:14.229 { 00:22:14.229 "name": "BaseBdev3", 00:22:14.229 "uuid": "013f20dc-6311-4cb3-ac07-8f54b8aa0741", 00:22:14.229 "is_configured": true, 00:22:14.229 "data_offset": 0, 00:22:14.229 "data_size": 65536 00:22:14.229 } 00:22:14.229 ] 00:22:14.229 } 00:22:14.229 } 00:22:14.229 }' 00:22:14.229 13:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:14.229 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='NewBaseBdev 00:22:14.229 BaseBdev2 00:22:14.229 BaseBdev3' 00:22:14.229 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.229 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:14.230 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.230 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.230 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:14.230 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.230 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.230 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.230 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.230 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.230 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.230 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:14.230 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.230 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.230 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.230 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.489 [2024-11-20 13:42:17.213864] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:14.489 [2024-11-20 13:42:17.213924] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:14.489 [2024-11-20 13:42:17.214049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:14.489 [2024-11-20 13:42:17.214128] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:22:14.489 [2024-11-20 13:42:17.214149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63985 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63985 ']' 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63985 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63985 00:22:14.489 killing process with pid 63985 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63985' 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63985 00:22:14.489 [2024-11-20 13:42:17.252069] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:14.489 13:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63985 00:22:14.747 [2024-11-20 13:42:17.522447] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:15.683 ************************************ 00:22:15.683 END TEST raid_state_function_test 00:22:15.683 ************************************ 00:22:15.683 13:42:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:22:15.683 00:22:15.683 real 0m12.079s 00:22:15.683 user 0m20.042s 00:22:15.683 sys 0m1.694s 00:22:15.683 13:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:15.683 13:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.947 13:42:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:22:15.947 13:42:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:15.947 13:42:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:15.947 13:42:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:15.947 ************************************ 00:22:15.947 START TEST raid_state_function_test_sb 00:22:15.947 ************************************ 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:15.947 Process raid pid: 64623 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64623 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64623' 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:15.947 13:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64623 00:22:15.948 13:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64623 ']' 00:22:15.948 13:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.948 13:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.948 13:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.948 13:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.948 13:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.948 [2024-11-20 13:42:18.787246] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:22:15.948 [2024-11-20 13:42:18.787683] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.205 [2024-11-20 13:42:18.978721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.537 [2024-11-20 13:42:19.153665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.537 [2024-11-20 13:42:19.377091] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:16.537 [2024-11-20 13:42:19.377153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.104 [2024-11-20 13:42:19.845587] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:17.104 [2024-11-20 13:42:19.845654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:17.104 [2024-11-20 13:42:19.845672] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:17.104 [2024-11-20 13:42:19.845688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:17.104 [2024-11-20 13:42:19.845698] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:22:17.104 [2024-11-20 13:42:19.845715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.104 13:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.104 "name": "Existed_Raid", 00:22:17.104 "uuid": "0ebda843-8a94-434b-af80-3db9ef4e8d86", 00:22:17.104 "strip_size_kb": 64, 00:22:17.104 "state": "configuring", 00:22:17.104 "raid_level": "raid0", 00:22:17.104 "superblock": true, 00:22:17.105 "num_base_bdevs": 3, 00:22:17.105 "num_base_bdevs_discovered": 0, 00:22:17.105 "num_base_bdevs_operational": 3, 00:22:17.105 "base_bdevs_list": [ 00:22:17.105 { 00:22:17.105 "name": "BaseBdev1", 00:22:17.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.105 "is_configured": false, 00:22:17.105 "data_offset": 0, 00:22:17.105 "data_size": 0 00:22:17.105 }, 00:22:17.105 { 00:22:17.105 "name": "BaseBdev2", 00:22:17.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.105 "is_configured": false, 00:22:17.105 "data_offset": 0, 00:22:17.105 "data_size": 0 00:22:17.105 }, 00:22:17.105 { 00:22:17.105 "name": "BaseBdev3", 00:22:17.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.105 "is_configured": false, 00:22:17.105 "data_offset": 0, 00:22:17.105 "data_size": 0 00:22:17.105 } 00:22:17.105 ] 00:22:17.105 }' 00:22:17.105 13:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.105 13:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.672 [2024-11-20 13:42:20.362310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:17.672 [2024-11-20 13:42:20.362526] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.672 [2024-11-20 13:42:20.370257] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:17.672 [2024-11-20 13:42:20.370317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:17.672 [2024-11-20 13:42:20.370334] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:17.672 [2024-11-20 13:42:20.370350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:17.672 [2024-11-20 13:42:20.370359] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:17.672 [2024-11-20 13:42:20.370373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.672 [2024-11-20 13:42:20.417062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:17.672 BaseBdev1 
00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.672 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.672 [ 00:22:17.672 { 00:22:17.672 "name": "BaseBdev1", 00:22:17.672 "aliases": [ 00:22:17.672 "92441045-5d75-40f3-a15d-e687005a08da" 00:22:17.672 ], 00:22:17.672 "product_name": "Malloc disk", 00:22:17.672 "block_size": 512, 00:22:17.672 "num_blocks": 65536, 00:22:17.672 "uuid": "92441045-5d75-40f3-a15d-e687005a08da", 00:22:17.672 "assigned_rate_limits": { 00:22:17.672 
"rw_ios_per_sec": 0, 00:22:17.672 "rw_mbytes_per_sec": 0, 00:22:17.672 "r_mbytes_per_sec": 0, 00:22:17.672 "w_mbytes_per_sec": 0 00:22:17.672 }, 00:22:17.672 "claimed": true, 00:22:17.672 "claim_type": "exclusive_write", 00:22:17.672 "zoned": false, 00:22:17.672 "supported_io_types": { 00:22:17.672 "read": true, 00:22:17.672 "write": true, 00:22:17.672 "unmap": true, 00:22:17.672 "flush": true, 00:22:17.672 "reset": true, 00:22:17.672 "nvme_admin": false, 00:22:17.672 "nvme_io": false, 00:22:17.672 "nvme_io_md": false, 00:22:17.672 "write_zeroes": true, 00:22:17.672 "zcopy": true, 00:22:17.672 "get_zone_info": false, 00:22:17.672 "zone_management": false, 00:22:17.672 "zone_append": false, 00:22:17.672 "compare": false, 00:22:17.672 "compare_and_write": false, 00:22:17.672 "abort": true, 00:22:17.672 "seek_hole": false, 00:22:17.672 "seek_data": false, 00:22:17.672 "copy": true, 00:22:17.672 "nvme_iov_md": false 00:22:17.672 }, 00:22:17.672 "memory_domains": [ 00:22:17.672 { 00:22:17.672 "dma_device_id": "system", 00:22:17.672 "dma_device_type": 1 00:22:17.672 }, 00:22:17.672 { 00:22:17.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.672 "dma_device_type": 2 00:22:17.672 } 00:22:17.672 ], 00:22:17.672 "driver_specific": {} 00:22:17.672 } 00:22:17.672 ] 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.673 "name": "Existed_Raid", 00:22:17.673 "uuid": "9de63ad5-34cb-4f84-9c96-ad14178b0d69", 00:22:17.673 "strip_size_kb": 64, 00:22:17.673 "state": "configuring", 00:22:17.673 "raid_level": "raid0", 00:22:17.673 "superblock": true, 00:22:17.673 "num_base_bdevs": 3, 00:22:17.673 "num_base_bdevs_discovered": 1, 00:22:17.673 "num_base_bdevs_operational": 3, 00:22:17.673 "base_bdevs_list": [ 00:22:17.673 { 00:22:17.673 "name": "BaseBdev1", 00:22:17.673 "uuid": "92441045-5d75-40f3-a15d-e687005a08da", 00:22:17.673 "is_configured": true, 00:22:17.673 "data_offset": 2048, 00:22:17.673 "data_size": 63488 
00:22:17.673 }, 00:22:17.673 { 00:22:17.673 "name": "BaseBdev2", 00:22:17.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.673 "is_configured": false, 00:22:17.673 "data_offset": 0, 00:22:17.673 "data_size": 0 00:22:17.673 }, 00:22:17.673 { 00:22:17.673 "name": "BaseBdev3", 00:22:17.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.673 "is_configured": false, 00:22:17.673 "data_offset": 0, 00:22:17.673 "data_size": 0 00:22:17.673 } 00:22:17.673 ] 00:22:17.673 }' 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.673 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.241 [2024-11-20 13:42:20.925255] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:18.241 [2024-11-20 13:42:20.925324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.241 [2024-11-20 13:42:20.933338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:18.241 [2024-11-20 
13:42:20.935839] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:18.241 [2024-11-20 13:42:20.936067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:18.241 [2024-11-20 13:42:20.936132] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:18.241 [2024-11-20 13:42:20.936290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.241 "name": "Existed_Raid", 00:22:18.241 "uuid": "33cf9dbd-3b9a-490e-a1ab-36e6e76a2e04", 00:22:18.241 "strip_size_kb": 64, 00:22:18.241 "state": "configuring", 00:22:18.241 "raid_level": "raid0", 00:22:18.241 "superblock": true, 00:22:18.241 "num_base_bdevs": 3, 00:22:18.241 "num_base_bdevs_discovered": 1, 00:22:18.241 "num_base_bdevs_operational": 3, 00:22:18.241 "base_bdevs_list": [ 00:22:18.241 { 00:22:18.241 "name": "BaseBdev1", 00:22:18.241 "uuid": "92441045-5d75-40f3-a15d-e687005a08da", 00:22:18.241 "is_configured": true, 00:22:18.241 "data_offset": 2048, 00:22:18.241 "data_size": 63488 00:22:18.241 }, 00:22:18.241 { 00:22:18.241 "name": "BaseBdev2", 00:22:18.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.241 "is_configured": false, 00:22:18.241 "data_offset": 0, 00:22:18.241 "data_size": 0 00:22:18.241 }, 00:22:18.241 { 00:22:18.241 "name": "BaseBdev3", 00:22:18.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.241 "is_configured": false, 00:22:18.241 "data_offset": 0, 00:22:18.241 "data_size": 0 00:22:18.241 } 00:22:18.241 ] 00:22:18.241 }' 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.241 13:42:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:18.807 13:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:18.807 13:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.807 13:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.807 [2024-11-20 13:42:21.504508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:18.807 BaseBdev2 00:22:18.807 13:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.807 13:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:18.807 13:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:18.807 13:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:18.807 13:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:18.807 13:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:18.807 13:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:18.807 13:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:18.807 13:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.807 13:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.807 13:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.807 13:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:18.807 13:42:21 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.807 13:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.807 [ 00:22:18.807 { 00:22:18.807 "name": "BaseBdev2", 00:22:18.807 "aliases": [ 00:22:18.807 "6c47e7cb-eee3-47b3-aed6-b462908dc281" 00:22:18.807 ], 00:22:18.807 "product_name": "Malloc disk", 00:22:18.807 "block_size": 512, 00:22:18.807 "num_blocks": 65536, 00:22:18.807 "uuid": "6c47e7cb-eee3-47b3-aed6-b462908dc281", 00:22:18.808 "assigned_rate_limits": { 00:22:18.808 "rw_ios_per_sec": 0, 00:22:18.808 "rw_mbytes_per_sec": 0, 00:22:18.808 "r_mbytes_per_sec": 0, 00:22:18.808 "w_mbytes_per_sec": 0 00:22:18.808 }, 00:22:18.808 "claimed": true, 00:22:18.808 "claim_type": "exclusive_write", 00:22:18.808 "zoned": false, 00:22:18.808 "supported_io_types": { 00:22:18.808 "read": true, 00:22:18.808 "write": true, 00:22:18.808 "unmap": true, 00:22:18.808 "flush": true, 00:22:18.808 "reset": true, 00:22:18.808 "nvme_admin": false, 00:22:18.808 "nvme_io": false, 00:22:18.808 "nvme_io_md": false, 00:22:18.808 "write_zeroes": true, 00:22:18.808 "zcopy": true, 00:22:18.808 "get_zone_info": false, 00:22:18.808 "zone_management": false, 00:22:18.808 "zone_append": false, 00:22:18.808 "compare": false, 00:22:18.808 "compare_and_write": false, 00:22:18.808 "abort": true, 00:22:18.808 "seek_hole": false, 00:22:18.808 "seek_data": false, 00:22:18.808 "copy": true, 00:22:18.808 "nvme_iov_md": false 00:22:18.808 }, 00:22:18.808 "memory_domains": [ 00:22:18.808 { 00:22:18.808 "dma_device_id": "system", 00:22:18.808 "dma_device_type": 1 00:22:18.808 }, 00:22:18.808 { 00:22:18.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.808 "dma_device_type": 2 00:22:18.808 } 00:22:18.808 ], 00:22:18.808 "driver_specific": {} 00:22:18.808 } 00:22:18.808 ] 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:18.808 "name": "Existed_Raid", 00:22:18.808 "uuid": "33cf9dbd-3b9a-490e-a1ab-36e6e76a2e04", 00:22:18.808 "strip_size_kb": 64, 00:22:18.808 "state": "configuring", 00:22:18.808 "raid_level": "raid0", 00:22:18.808 "superblock": true, 00:22:18.808 "num_base_bdevs": 3, 00:22:18.808 "num_base_bdevs_discovered": 2, 00:22:18.808 "num_base_bdevs_operational": 3, 00:22:18.808 "base_bdevs_list": [ 00:22:18.808 { 00:22:18.808 "name": "BaseBdev1", 00:22:18.808 "uuid": "92441045-5d75-40f3-a15d-e687005a08da", 00:22:18.808 "is_configured": true, 00:22:18.808 "data_offset": 2048, 00:22:18.808 "data_size": 63488 00:22:18.808 }, 00:22:18.808 { 00:22:18.808 "name": "BaseBdev2", 00:22:18.808 "uuid": "6c47e7cb-eee3-47b3-aed6-b462908dc281", 00:22:18.808 "is_configured": true, 00:22:18.808 "data_offset": 2048, 00:22:18.808 "data_size": 63488 00:22:18.808 }, 00:22:18.808 { 00:22:18.808 "name": "BaseBdev3", 00:22:18.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.808 "is_configured": false, 00:22:18.808 "data_offset": 0, 00:22:18.808 "data_size": 0 00:22:18.808 } 00:22:18.808 ] 00:22:18.808 }' 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:18.808 13:42:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.373 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:19.373 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.373 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.373 [2024-11-20 13:42:22.102449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:19.374 [2024-11-20 13:42:22.102792] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:19.374 [2024-11-20 13:42:22.102822] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:19.374 BaseBdev3 00:22:19.374 [2024-11-20 13:42:22.103224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:19.374 [2024-11-20 13:42:22.103434] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:19.374 [2024-11-20 13:42:22.103452] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:19.374 [2024-11-20 13:42:22.103632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.374 [ 00:22:19.374 { 00:22:19.374 "name": "BaseBdev3", 00:22:19.374 "aliases": [ 00:22:19.374 "1fe1091d-d585-4660-a899-ce78e5d9f55a" 00:22:19.374 ], 00:22:19.374 "product_name": "Malloc disk", 00:22:19.374 "block_size": 512, 00:22:19.374 "num_blocks": 65536, 00:22:19.374 "uuid": "1fe1091d-d585-4660-a899-ce78e5d9f55a", 00:22:19.374 "assigned_rate_limits": { 00:22:19.374 "rw_ios_per_sec": 0, 00:22:19.374 "rw_mbytes_per_sec": 0, 00:22:19.374 "r_mbytes_per_sec": 0, 00:22:19.374 "w_mbytes_per_sec": 0 00:22:19.374 }, 00:22:19.374 "claimed": true, 00:22:19.374 "claim_type": "exclusive_write", 00:22:19.374 "zoned": false, 00:22:19.374 "supported_io_types": { 00:22:19.374 "read": true, 00:22:19.374 "write": true, 00:22:19.374 "unmap": true, 00:22:19.374 "flush": true, 00:22:19.374 "reset": true, 00:22:19.374 "nvme_admin": false, 00:22:19.374 "nvme_io": false, 00:22:19.374 "nvme_io_md": false, 00:22:19.374 "write_zeroes": true, 00:22:19.374 "zcopy": true, 00:22:19.374 "get_zone_info": false, 00:22:19.374 "zone_management": false, 00:22:19.374 "zone_append": false, 00:22:19.374 "compare": false, 00:22:19.374 "compare_and_write": false, 00:22:19.374 "abort": true, 00:22:19.374 "seek_hole": false, 00:22:19.374 "seek_data": false, 00:22:19.374 "copy": true, 00:22:19.374 "nvme_iov_md": false 00:22:19.374 }, 00:22:19.374 "memory_domains": [ 00:22:19.374 { 00:22:19.374 "dma_device_id": "system", 00:22:19.374 "dma_device_type": 1 00:22:19.374 }, 00:22:19.374 { 00:22:19.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.374 "dma_device_type": 2 00:22:19.374 } 00:22:19.374 ], 00:22:19.374 "driver_specific": 
{} 00:22:19.374 } 00:22:19.374 ] 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:19.374 "name": "Existed_Raid", 00:22:19.374 "uuid": "33cf9dbd-3b9a-490e-a1ab-36e6e76a2e04", 00:22:19.374 "strip_size_kb": 64, 00:22:19.374 "state": "online", 00:22:19.374 "raid_level": "raid0", 00:22:19.374 "superblock": true, 00:22:19.374 "num_base_bdevs": 3, 00:22:19.374 "num_base_bdevs_discovered": 3, 00:22:19.374 "num_base_bdevs_operational": 3, 00:22:19.374 "base_bdevs_list": [ 00:22:19.374 { 00:22:19.374 "name": "BaseBdev1", 00:22:19.374 "uuid": "92441045-5d75-40f3-a15d-e687005a08da", 00:22:19.374 "is_configured": true, 00:22:19.374 "data_offset": 2048, 00:22:19.374 "data_size": 63488 00:22:19.374 }, 00:22:19.374 { 00:22:19.374 "name": "BaseBdev2", 00:22:19.374 "uuid": "6c47e7cb-eee3-47b3-aed6-b462908dc281", 00:22:19.374 "is_configured": true, 00:22:19.374 "data_offset": 2048, 00:22:19.374 "data_size": 63488 00:22:19.374 }, 00:22:19.374 { 00:22:19.374 "name": "BaseBdev3", 00:22:19.374 "uuid": "1fe1091d-d585-4660-a899-ce78e5d9f55a", 00:22:19.374 "is_configured": true, 00:22:19.374 "data_offset": 2048, 00:22:19.374 "data_size": 63488 00:22:19.374 } 00:22:19.374 ] 00:22:19.374 }' 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:19.374 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.941 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:19.941 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:19.941 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:22:19.941 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:19.941 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:19.941 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:19.941 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:19.941 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:19.941 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.941 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.941 [2024-11-20 13:42:22.663215] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:19.941 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.941 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:19.941 "name": "Existed_Raid", 00:22:19.941 "aliases": [ 00:22:19.941 "33cf9dbd-3b9a-490e-a1ab-36e6e76a2e04" 00:22:19.941 ], 00:22:19.941 "product_name": "Raid Volume", 00:22:19.941 "block_size": 512, 00:22:19.942 "num_blocks": 190464, 00:22:19.942 "uuid": "33cf9dbd-3b9a-490e-a1ab-36e6e76a2e04", 00:22:19.942 "assigned_rate_limits": { 00:22:19.942 "rw_ios_per_sec": 0, 00:22:19.942 "rw_mbytes_per_sec": 0, 00:22:19.942 "r_mbytes_per_sec": 0, 00:22:19.942 "w_mbytes_per_sec": 0 00:22:19.942 }, 00:22:19.942 "claimed": false, 00:22:19.942 "zoned": false, 00:22:19.942 "supported_io_types": { 00:22:19.942 "read": true, 00:22:19.942 "write": true, 00:22:19.942 "unmap": true, 00:22:19.942 "flush": true, 00:22:19.942 "reset": true, 00:22:19.942 "nvme_admin": false, 00:22:19.942 "nvme_io": false, 00:22:19.942 "nvme_io_md": false, 00:22:19.942 
"write_zeroes": true, 00:22:19.942 "zcopy": false, 00:22:19.942 "get_zone_info": false, 00:22:19.942 "zone_management": false, 00:22:19.942 "zone_append": false, 00:22:19.942 "compare": false, 00:22:19.942 "compare_and_write": false, 00:22:19.942 "abort": false, 00:22:19.942 "seek_hole": false, 00:22:19.942 "seek_data": false, 00:22:19.942 "copy": false, 00:22:19.942 "nvme_iov_md": false 00:22:19.942 }, 00:22:19.942 "memory_domains": [ 00:22:19.942 { 00:22:19.942 "dma_device_id": "system", 00:22:19.942 "dma_device_type": 1 00:22:19.942 }, 00:22:19.942 { 00:22:19.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.942 "dma_device_type": 2 00:22:19.942 }, 00:22:19.942 { 00:22:19.942 "dma_device_id": "system", 00:22:19.942 "dma_device_type": 1 00:22:19.942 }, 00:22:19.942 { 00:22:19.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.942 "dma_device_type": 2 00:22:19.942 }, 00:22:19.942 { 00:22:19.942 "dma_device_id": "system", 00:22:19.942 "dma_device_type": 1 00:22:19.942 }, 00:22:19.942 { 00:22:19.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.942 "dma_device_type": 2 00:22:19.942 } 00:22:19.942 ], 00:22:19.942 "driver_specific": { 00:22:19.942 "raid": { 00:22:19.942 "uuid": "33cf9dbd-3b9a-490e-a1ab-36e6e76a2e04", 00:22:19.942 "strip_size_kb": 64, 00:22:19.942 "state": "online", 00:22:19.942 "raid_level": "raid0", 00:22:19.942 "superblock": true, 00:22:19.942 "num_base_bdevs": 3, 00:22:19.942 "num_base_bdevs_discovered": 3, 00:22:19.942 "num_base_bdevs_operational": 3, 00:22:19.942 "base_bdevs_list": [ 00:22:19.942 { 00:22:19.942 "name": "BaseBdev1", 00:22:19.942 "uuid": "92441045-5d75-40f3-a15d-e687005a08da", 00:22:19.942 "is_configured": true, 00:22:19.942 "data_offset": 2048, 00:22:19.942 "data_size": 63488 00:22:19.942 }, 00:22:19.942 { 00:22:19.942 "name": "BaseBdev2", 00:22:19.942 "uuid": "6c47e7cb-eee3-47b3-aed6-b462908dc281", 00:22:19.942 "is_configured": true, 00:22:19.942 "data_offset": 2048, 00:22:19.942 "data_size": 63488 00:22:19.942 }, 
00:22:19.942 { 00:22:19.942 "name": "BaseBdev3", 00:22:19.942 "uuid": "1fe1091d-d585-4660-a899-ce78e5d9f55a", 00:22:19.942 "is_configured": true, 00:22:19.942 "data_offset": 2048, 00:22:19.942 "data_size": 63488 00:22:19.942 } 00:22:19.942 ] 00:22:19.942 } 00:22:19.942 } 00:22:19.942 }' 00:22:19.942 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:19.942 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:19.942 BaseBdev2 00:22:19.942 BaseBdev3' 00:22:19.942 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.942 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:19.942 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:19.942 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:19.942 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.942 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:19.942 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:19.942 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:20.200 
13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.200 13:42:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.200 [2024-11-20 13:42:22.962769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:20.201 [2024-11-20 13:42:22.962953] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:20.201 [2024-11-20 13:42:22.963186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.201 "name": "Existed_Raid", 00:22:20.201 "uuid": "33cf9dbd-3b9a-490e-a1ab-36e6e76a2e04", 00:22:20.201 "strip_size_kb": 64, 00:22:20.201 "state": "offline", 00:22:20.201 "raid_level": "raid0", 00:22:20.201 "superblock": true, 00:22:20.201 "num_base_bdevs": 3, 00:22:20.201 "num_base_bdevs_discovered": 2, 00:22:20.201 "num_base_bdevs_operational": 2, 00:22:20.201 "base_bdevs_list": [ 00:22:20.201 { 00:22:20.201 "name": null, 00:22:20.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.201 "is_configured": false, 00:22:20.201 "data_offset": 0, 00:22:20.201 "data_size": 63488 00:22:20.201 }, 00:22:20.201 { 00:22:20.201 "name": "BaseBdev2", 00:22:20.201 "uuid": "6c47e7cb-eee3-47b3-aed6-b462908dc281", 00:22:20.201 "is_configured": true, 00:22:20.201 "data_offset": 2048, 00:22:20.201 "data_size": 63488 00:22:20.201 }, 00:22:20.201 { 00:22:20.201 "name": "BaseBdev3", 00:22:20.201 "uuid": "1fe1091d-d585-4660-a899-ce78e5d9f55a", 
00:22:20.201 "is_configured": true, 00:22:20.201 "data_offset": 2048, 00:22:20.201 "data_size": 63488 00:22:20.201 } 00:22:20.201 ] 00:22:20.201 }' 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.201 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.767 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:20.767 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:20.767 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:20.767 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.767 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.767 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.767 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.767 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:20.767 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:20.767 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:20.767 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.767 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.767 [2024-11-20 13:42:23.614727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:21.026 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.026 13:42:23 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:21.026 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:21.026 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.027 [2024-11-20 13:42:23.757089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:21.027 [2024-11-20 13:42:23.757190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.027 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.287 BaseBdev2 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:21.287 13:42:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.287 [ 00:22:21.287 { 00:22:21.287 "name": "BaseBdev2", 00:22:21.287 "aliases": [ 00:22:21.287 "28b69cd0-cd99-47ec-a1ab-f0bb82770e9c" 00:22:21.287 ], 00:22:21.287 "product_name": "Malloc disk", 00:22:21.287 "block_size": 512, 00:22:21.287 "num_blocks": 65536, 00:22:21.287 "uuid": "28b69cd0-cd99-47ec-a1ab-f0bb82770e9c", 00:22:21.287 "assigned_rate_limits": { 00:22:21.287 "rw_ios_per_sec": 0, 00:22:21.287 "rw_mbytes_per_sec": 0, 00:22:21.287 "r_mbytes_per_sec": 0, 00:22:21.287 "w_mbytes_per_sec": 0 00:22:21.287 }, 00:22:21.287 "claimed": false, 00:22:21.287 "zoned": false, 00:22:21.287 "supported_io_types": { 00:22:21.287 "read": true, 00:22:21.287 "write": true, 00:22:21.287 "unmap": true, 00:22:21.287 "flush": true, 00:22:21.287 "reset": true, 00:22:21.287 "nvme_admin": false, 00:22:21.287 "nvme_io": false, 00:22:21.287 "nvme_io_md": false, 00:22:21.287 "write_zeroes": true, 00:22:21.287 "zcopy": true, 00:22:21.287 "get_zone_info": false, 00:22:21.287 
"zone_management": false, 00:22:21.287 "zone_append": false, 00:22:21.287 "compare": false, 00:22:21.287 "compare_and_write": false, 00:22:21.287 "abort": true, 00:22:21.287 "seek_hole": false, 00:22:21.287 "seek_data": false, 00:22:21.287 "copy": true, 00:22:21.287 "nvme_iov_md": false 00:22:21.287 }, 00:22:21.287 "memory_domains": [ 00:22:21.287 { 00:22:21.287 "dma_device_id": "system", 00:22:21.287 "dma_device_type": 1 00:22:21.287 }, 00:22:21.287 { 00:22:21.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.287 "dma_device_type": 2 00:22:21.287 } 00:22:21.287 ], 00:22:21.287 "driver_specific": {} 00:22:21.287 } 00:22:21.287 ] 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.287 13:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.287 BaseBdev3 00:22:21.287 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.287 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:21.287 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:21.287 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:21.287 13:42:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:22:21.287 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:21.287 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:21.287 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:21.287 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.287 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.287 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.287 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:21.287 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.287 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.287 [ 00:22:21.287 { 00:22:21.287 "name": "BaseBdev3", 00:22:21.287 "aliases": [ 00:22:21.287 "b069bb61-f2e5-4ccf-893d-5f03f1eaf9b4" 00:22:21.287 ], 00:22:21.287 "product_name": "Malloc disk", 00:22:21.287 "block_size": 512, 00:22:21.287 "num_blocks": 65536, 00:22:21.287 "uuid": "b069bb61-f2e5-4ccf-893d-5f03f1eaf9b4", 00:22:21.287 "assigned_rate_limits": { 00:22:21.287 "rw_ios_per_sec": 0, 00:22:21.287 "rw_mbytes_per_sec": 0, 00:22:21.287 "r_mbytes_per_sec": 0, 00:22:21.287 "w_mbytes_per_sec": 0 00:22:21.287 }, 00:22:21.287 "claimed": false, 00:22:21.287 "zoned": false, 00:22:21.287 "supported_io_types": { 00:22:21.287 "read": true, 00:22:21.287 "write": true, 00:22:21.287 "unmap": true, 00:22:21.287 "flush": true, 00:22:21.287 "reset": true, 00:22:21.287 "nvme_admin": false, 00:22:21.287 "nvme_io": false, 00:22:21.287 "nvme_io_md": false, 00:22:21.287 "write_zeroes": true, 00:22:21.287 
"zcopy": true, 00:22:21.287 "get_zone_info": false, 00:22:21.288 "zone_management": false, 00:22:21.288 "zone_append": false, 00:22:21.288 "compare": false, 00:22:21.288 "compare_and_write": false, 00:22:21.288 "abort": true, 00:22:21.288 "seek_hole": false, 00:22:21.288 "seek_data": false, 00:22:21.288 "copy": true, 00:22:21.288 "nvme_iov_md": false 00:22:21.288 }, 00:22:21.288 "memory_domains": [ 00:22:21.288 { 00:22:21.288 "dma_device_id": "system", 00:22:21.288 "dma_device_type": 1 00:22:21.288 }, 00:22:21.288 { 00:22:21.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.288 "dma_device_type": 2 00:22:21.288 } 00:22:21.288 ], 00:22:21.288 "driver_specific": {} 00:22:21.288 } 00:22:21.288 ] 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.288 [2024-11-20 13:42:24.057327] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:21.288 [2024-11-20 13:42:24.057387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:21.288 [2024-11-20 13:42:24.057426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:21.288 [2024-11-20 13:42:24.059949] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.288 13:42:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:21.288 "name": "Existed_Raid", 00:22:21.288 "uuid": "aa46362e-41a7-4086-aec2-db1ed3532ffd", 00:22:21.288 "strip_size_kb": 64, 00:22:21.288 "state": "configuring", 00:22:21.288 "raid_level": "raid0", 00:22:21.288 "superblock": true, 00:22:21.288 "num_base_bdevs": 3, 00:22:21.288 "num_base_bdevs_discovered": 2, 00:22:21.288 "num_base_bdevs_operational": 3, 00:22:21.288 "base_bdevs_list": [ 00:22:21.288 { 00:22:21.288 "name": "BaseBdev1", 00:22:21.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.288 "is_configured": false, 00:22:21.288 "data_offset": 0, 00:22:21.288 "data_size": 0 00:22:21.288 }, 00:22:21.288 { 00:22:21.288 "name": "BaseBdev2", 00:22:21.288 "uuid": "28b69cd0-cd99-47ec-a1ab-f0bb82770e9c", 00:22:21.288 "is_configured": true, 00:22:21.288 "data_offset": 2048, 00:22:21.288 "data_size": 63488 00:22:21.288 }, 00:22:21.288 { 00:22:21.288 "name": "BaseBdev3", 00:22:21.288 "uuid": "b069bb61-f2e5-4ccf-893d-5f03f1eaf9b4", 00:22:21.288 "is_configured": true, 00:22:21.288 "data_offset": 2048, 00:22:21.288 "data_size": 63488 00:22:21.288 } 00:22:21.288 ] 00:22:21.288 }' 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:21.288 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.855 [2024-11-20 13:42:24.597489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.855 13:42:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:21.855 "name": "Existed_Raid", 00:22:21.855 "uuid": "aa46362e-41a7-4086-aec2-db1ed3532ffd", 00:22:21.855 "strip_size_kb": 64, 
00:22:21.855 "state": "configuring", 00:22:21.855 "raid_level": "raid0", 00:22:21.855 "superblock": true, 00:22:21.855 "num_base_bdevs": 3, 00:22:21.855 "num_base_bdevs_discovered": 1, 00:22:21.855 "num_base_bdevs_operational": 3, 00:22:21.855 "base_bdevs_list": [ 00:22:21.855 { 00:22:21.855 "name": "BaseBdev1", 00:22:21.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.855 "is_configured": false, 00:22:21.855 "data_offset": 0, 00:22:21.855 "data_size": 0 00:22:21.855 }, 00:22:21.855 { 00:22:21.855 "name": null, 00:22:21.855 "uuid": "28b69cd0-cd99-47ec-a1ab-f0bb82770e9c", 00:22:21.855 "is_configured": false, 00:22:21.855 "data_offset": 0, 00:22:21.855 "data_size": 63488 00:22:21.855 }, 00:22:21.855 { 00:22:21.855 "name": "BaseBdev3", 00:22:21.855 "uuid": "b069bb61-f2e5-4ccf-893d-5f03f1eaf9b4", 00:22:21.855 "is_configured": true, 00:22:21.855 "data_offset": 2048, 00:22:21.855 "data_size": 63488 00:22:21.855 } 00:22:21.855 ] 00:22:21.855 }' 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:21.855 13:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.423 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.423 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.423 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.423 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.424 [2024-11-20 13:42:25.208876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:22.424 BaseBdev1 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.424 
[ 00:22:22.424 { 00:22:22.424 "name": "BaseBdev1", 00:22:22.424 "aliases": [ 00:22:22.424 "adbd9986-deaa-4a12-8add-58a7cbe415bf" 00:22:22.424 ], 00:22:22.424 "product_name": "Malloc disk", 00:22:22.424 "block_size": 512, 00:22:22.424 "num_blocks": 65536, 00:22:22.424 "uuid": "adbd9986-deaa-4a12-8add-58a7cbe415bf", 00:22:22.424 "assigned_rate_limits": { 00:22:22.424 "rw_ios_per_sec": 0, 00:22:22.424 "rw_mbytes_per_sec": 0, 00:22:22.424 "r_mbytes_per_sec": 0, 00:22:22.424 "w_mbytes_per_sec": 0 00:22:22.424 }, 00:22:22.424 "claimed": true, 00:22:22.424 "claim_type": "exclusive_write", 00:22:22.424 "zoned": false, 00:22:22.424 "supported_io_types": { 00:22:22.424 "read": true, 00:22:22.424 "write": true, 00:22:22.424 "unmap": true, 00:22:22.424 "flush": true, 00:22:22.424 "reset": true, 00:22:22.424 "nvme_admin": false, 00:22:22.424 "nvme_io": false, 00:22:22.424 "nvme_io_md": false, 00:22:22.424 "write_zeroes": true, 00:22:22.424 "zcopy": true, 00:22:22.424 "get_zone_info": false, 00:22:22.424 "zone_management": false, 00:22:22.424 "zone_append": false, 00:22:22.424 "compare": false, 00:22:22.424 "compare_and_write": false, 00:22:22.424 "abort": true, 00:22:22.424 "seek_hole": false, 00:22:22.424 "seek_data": false, 00:22:22.424 "copy": true, 00:22:22.424 "nvme_iov_md": false 00:22:22.424 }, 00:22:22.424 "memory_domains": [ 00:22:22.424 { 00:22:22.424 "dma_device_id": "system", 00:22:22.424 "dma_device_type": 1 00:22:22.424 }, 00:22:22.424 { 00:22:22.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:22.424 "dma_device_type": 2 00:22:22.424 } 00:22:22.424 ], 00:22:22.424 "driver_specific": {} 00:22:22.424 } 00:22:22.424 ] 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.424 "name": "Existed_Raid", 00:22:22.424 "uuid": "aa46362e-41a7-4086-aec2-db1ed3532ffd", 00:22:22.424 "strip_size_kb": 64, 00:22:22.424 "state": "configuring", 00:22:22.424 "raid_level": "raid0", 00:22:22.424 "superblock": true, 
00:22:22.424 "num_base_bdevs": 3, 00:22:22.424 "num_base_bdevs_discovered": 2, 00:22:22.424 "num_base_bdevs_operational": 3, 00:22:22.424 "base_bdevs_list": [ 00:22:22.424 { 00:22:22.424 "name": "BaseBdev1", 00:22:22.424 "uuid": "adbd9986-deaa-4a12-8add-58a7cbe415bf", 00:22:22.424 "is_configured": true, 00:22:22.424 "data_offset": 2048, 00:22:22.424 "data_size": 63488 00:22:22.424 }, 00:22:22.424 { 00:22:22.424 "name": null, 00:22:22.424 "uuid": "28b69cd0-cd99-47ec-a1ab-f0bb82770e9c", 00:22:22.424 "is_configured": false, 00:22:22.424 "data_offset": 0, 00:22:22.424 "data_size": 63488 00:22:22.424 }, 00:22:22.424 { 00:22:22.424 "name": "BaseBdev3", 00:22:22.424 "uuid": "b069bb61-f2e5-4ccf-893d-5f03f1eaf9b4", 00:22:22.424 "is_configured": true, 00:22:22.424 "data_offset": 2048, 00:22:22.424 "data_size": 63488 00:22:22.424 } 00:22:22.424 ] 00:22:22.424 }' 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.424 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.992 [2024-11-20 13:42:25.813164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.992 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.992 "name": "Existed_Raid", 00:22:22.992 "uuid": "aa46362e-41a7-4086-aec2-db1ed3532ffd", 00:22:22.992 "strip_size_kb": 64, 00:22:22.992 "state": "configuring", 00:22:22.993 "raid_level": "raid0", 00:22:22.993 "superblock": true, 00:22:22.993 "num_base_bdevs": 3, 00:22:22.993 "num_base_bdevs_discovered": 1, 00:22:22.993 "num_base_bdevs_operational": 3, 00:22:22.993 "base_bdevs_list": [ 00:22:22.993 { 00:22:22.993 "name": "BaseBdev1", 00:22:22.993 "uuid": "adbd9986-deaa-4a12-8add-58a7cbe415bf", 00:22:22.993 "is_configured": true, 00:22:22.993 "data_offset": 2048, 00:22:22.993 "data_size": 63488 00:22:22.993 }, 00:22:22.993 { 00:22:22.993 "name": null, 00:22:22.993 "uuid": "28b69cd0-cd99-47ec-a1ab-f0bb82770e9c", 00:22:22.993 "is_configured": false, 00:22:22.993 "data_offset": 0, 00:22:22.993 "data_size": 63488 00:22:22.993 }, 00:22:22.993 { 00:22:22.993 "name": null, 00:22:22.993 "uuid": "b069bb61-f2e5-4ccf-893d-5f03f1eaf9b4", 00:22:22.993 "is_configured": false, 00:22:22.993 "data_offset": 0, 00:22:22.993 "data_size": 63488 00:22:22.993 } 00:22:22.993 ] 00:22:22.993 }' 00:22:22.993 13:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.993 13:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.560 [2024-11-20 13:42:26.389341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.560 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:23.560 "name": "Existed_Raid", 00:22:23.560 "uuid": "aa46362e-41a7-4086-aec2-db1ed3532ffd", 00:22:23.560 "strip_size_kb": 64, 00:22:23.560 "state": "configuring", 00:22:23.560 "raid_level": "raid0", 00:22:23.560 "superblock": true, 00:22:23.560 "num_base_bdevs": 3, 00:22:23.560 "num_base_bdevs_discovered": 2, 00:22:23.560 "num_base_bdevs_operational": 3, 00:22:23.560 "base_bdevs_list": [ 00:22:23.560 { 00:22:23.560 "name": "BaseBdev1", 00:22:23.560 "uuid": "adbd9986-deaa-4a12-8add-58a7cbe415bf", 00:22:23.560 "is_configured": true, 00:22:23.560 "data_offset": 2048, 00:22:23.560 "data_size": 63488 00:22:23.560 }, 00:22:23.560 { 00:22:23.560 "name": null, 00:22:23.560 "uuid": "28b69cd0-cd99-47ec-a1ab-f0bb82770e9c", 00:22:23.560 "is_configured": false, 00:22:23.560 "data_offset": 0, 00:22:23.560 "data_size": 63488 00:22:23.560 }, 00:22:23.560 { 00:22:23.560 "name": "BaseBdev3", 00:22:23.560 "uuid": "b069bb61-f2e5-4ccf-893d-5f03f1eaf9b4", 00:22:23.560 "is_configured": true, 00:22:23.561 "data_offset": 2048, 00:22:23.561 "data_size": 63488 00:22:23.561 } 00:22:23.561 ] 00:22:23.561 }' 00:22:23.561 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:23.561 13:42:26 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:22:24.126 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:24.126 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.126 13:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.126 13:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.126 13:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.126 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:24.126 13:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:24.126 13:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.126 13:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.126 [2024-11-20 13:42:26.981518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:24.385 13:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.385 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:24.385 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:24.385 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:24.385 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:24.385 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:24.385 13:42:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:24.385 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.385 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.385 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.385 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:24.385 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.385 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:24.385 13:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.385 13:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.385 13:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.385 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.385 "name": "Existed_Raid", 00:22:24.385 "uuid": "aa46362e-41a7-4086-aec2-db1ed3532ffd", 00:22:24.385 "strip_size_kb": 64, 00:22:24.385 "state": "configuring", 00:22:24.385 "raid_level": "raid0", 00:22:24.385 "superblock": true, 00:22:24.385 "num_base_bdevs": 3, 00:22:24.385 "num_base_bdevs_discovered": 1, 00:22:24.385 "num_base_bdevs_operational": 3, 00:22:24.385 "base_bdevs_list": [ 00:22:24.385 { 00:22:24.385 "name": null, 00:22:24.385 "uuid": "adbd9986-deaa-4a12-8add-58a7cbe415bf", 00:22:24.385 "is_configured": false, 00:22:24.385 "data_offset": 0, 00:22:24.385 "data_size": 63488 00:22:24.385 }, 00:22:24.385 { 00:22:24.385 "name": null, 00:22:24.385 "uuid": "28b69cd0-cd99-47ec-a1ab-f0bb82770e9c", 00:22:24.385 "is_configured": false, 00:22:24.385 "data_offset": 0, 00:22:24.385 
"data_size": 63488 00:22:24.385 }, 00:22:24.385 { 00:22:24.385 "name": "BaseBdev3", 00:22:24.385 "uuid": "b069bb61-f2e5-4ccf-893d-5f03f1eaf9b4", 00:22:24.385 "is_configured": true, 00:22:24.385 "data_offset": 2048, 00:22:24.385 "data_size": 63488 00:22:24.385 } 00:22:24.385 ] 00:22:24.385 }' 00:22:24.385 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.385 13:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.953 [2024-11-20 13:42:27.643160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:22:24.953 13:42:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.953 "name": "Existed_Raid", 00:22:24.953 "uuid": "aa46362e-41a7-4086-aec2-db1ed3532ffd", 00:22:24.953 "strip_size_kb": 64, 00:22:24.953 "state": "configuring", 00:22:24.953 "raid_level": "raid0", 00:22:24.953 "superblock": true, 00:22:24.953 "num_base_bdevs": 3, 00:22:24.953 
"num_base_bdevs_discovered": 2, 00:22:24.953 "num_base_bdevs_operational": 3, 00:22:24.953 "base_bdevs_list": [ 00:22:24.953 { 00:22:24.953 "name": null, 00:22:24.953 "uuid": "adbd9986-deaa-4a12-8add-58a7cbe415bf", 00:22:24.953 "is_configured": false, 00:22:24.953 "data_offset": 0, 00:22:24.953 "data_size": 63488 00:22:24.953 }, 00:22:24.953 { 00:22:24.953 "name": "BaseBdev2", 00:22:24.953 "uuid": "28b69cd0-cd99-47ec-a1ab-f0bb82770e9c", 00:22:24.953 "is_configured": true, 00:22:24.953 "data_offset": 2048, 00:22:24.953 "data_size": 63488 00:22:24.953 }, 00:22:24.953 { 00:22:24.953 "name": "BaseBdev3", 00:22:24.953 "uuid": "b069bb61-f2e5-4ccf-893d-5f03f1eaf9b4", 00:22:24.953 "is_configured": true, 00:22:24.953 "data_offset": 2048, 00:22:24.953 "data_size": 63488 00:22:24.953 } 00:22:24.953 ] 00:22:24.953 }' 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.953 13:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.520 13:42:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u adbd9986-deaa-4a12-8add-58a7cbe415bf 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.520 [2024-11-20 13:42:28.298309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:25.520 [2024-11-20 13:42:28.298849] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:25.520 [2024-11-20 13:42:28.298882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:25.520 NewBaseBdev 00:22:25.520 [2024-11-20 13:42:28.299282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:25.520 [2024-11-20 13:42:28.299558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:25.520 [2024-11-20 13:42:28.299591] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.520 [2024-11-20 13:42:28.299831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.520 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.520 [ 00:22:25.520 { 00:22:25.520 "name": "NewBaseBdev", 00:22:25.520 "aliases": [ 00:22:25.520 "adbd9986-deaa-4a12-8add-58a7cbe415bf" 00:22:25.520 ], 00:22:25.520 "product_name": "Malloc disk", 00:22:25.520 "block_size": 512, 00:22:25.520 "num_blocks": 65536, 00:22:25.520 "uuid": "adbd9986-deaa-4a12-8add-58a7cbe415bf", 00:22:25.520 "assigned_rate_limits": { 00:22:25.521 "rw_ios_per_sec": 0, 00:22:25.521 "rw_mbytes_per_sec": 0, 00:22:25.521 "r_mbytes_per_sec": 0, 00:22:25.521 "w_mbytes_per_sec": 0 00:22:25.521 }, 00:22:25.521 "claimed": true, 00:22:25.521 "claim_type": "exclusive_write", 00:22:25.521 "zoned": false, 00:22:25.521 "supported_io_types": { 00:22:25.521 "read": true, 00:22:25.521 "write": true, 
00:22:25.521 "unmap": true, 00:22:25.521 "flush": true, 00:22:25.521 "reset": true, 00:22:25.521 "nvme_admin": false, 00:22:25.521 "nvme_io": false, 00:22:25.521 "nvme_io_md": false, 00:22:25.521 "write_zeroes": true, 00:22:25.521 "zcopy": true, 00:22:25.521 "get_zone_info": false, 00:22:25.521 "zone_management": false, 00:22:25.521 "zone_append": false, 00:22:25.521 "compare": false, 00:22:25.521 "compare_and_write": false, 00:22:25.521 "abort": true, 00:22:25.521 "seek_hole": false, 00:22:25.521 "seek_data": false, 00:22:25.521 "copy": true, 00:22:25.521 "nvme_iov_md": false 00:22:25.521 }, 00:22:25.521 "memory_domains": [ 00:22:25.521 { 00:22:25.521 "dma_device_id": "system", 00:22:25.521 "dma_device_type": 1 00:22:25.521 }, 00:22:25.521 { 00:22:25.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.521 "dma_device_type": 2 00:22:25.521 } 00:22:25.521 ], 00:22:25.521 "driver_specific": {} 00:22:25.521 } 00:22:25.521 ] 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:25.521 "name": "Existed_Raid", 00:22:25.521 "uuid": "aa46362e-41a7-4086-aec2-db1ed3532ffd", 00:22:25.521 "strip_size_kb": 64, 00:22:25.521 "state": "online", 00:22:25.521 "raid_level": "raid0", 00:22:25.521 "superblock": true, 00:22:25.521 "num_base_bdevs": 3, 00:22:25.521 "num_base_bdevs_discovered": 3, 00:22:25.521 "num_base_bdevs_operational": 3, 00:22:25.521 "base_bdevs_list": [ 00:22:25.521 { 00:22:25.521 "name": "NewBaseBdev", 00:22:25.521 "uuid": "adbd9986-deaa-4a12-8add-58a7cbe415bf", 00:22:25.521 "is_configured": true, 00:22:25.521 "data_offset": 2048, 00:22:25.521 "data_size": 63488 00:22:25.521 }, 00:22:25.521 { 00:22:25.521 "name": "BaseBdev2", 00:22:25.521 "uuid": "28b69cd0-cd99-47ec-a1ab-f0bb82770e9c", 00:22:25.521 "is_configured": true, 00:22:25.521 "data_offset": 2048, 00:22:25.521 "data_size": 63488 00:22:25.521 }, 00:22:25.521 { 00:22:25.521 "name": "BaseBdev3", 00:22:25.521 "uuid": 
"b069bb61-f2e5-4ccf-893d-5f03f1eaf9b4", 00:22:25.521 "is_configured": true, 00:22:25.521 "data_offset": 2048, 00:22:25.521 "data_size": 63488 00:22:25.521 } 00:22:25.521 ] 00:22:25.521 }' 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:25.521 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.156 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:26.156 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:26.156 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:26.156 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:26.156 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:26.156 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:26.156 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:26.156 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:26.156 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.156 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.156 [2024-11-20 13:42:28.842890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:26.156 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.156 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:26.156 "name": "Existed_Raid", 00:22:26.156 "aliases": [ 00:22:26.156 "aa46362e-41a7-4086-aec2-db1ed3532ffd" 
00:22:26.156 ], 00:22:26.156 "product_name": "Raid Volume", 00:22:26.156 "block_size": 512, 00:22:26.156 "num_blocks": 190464, 00:22:26.156 "uuid": "aa46362e-41a7-4086-aec2-db1ed3532ffd", 00:22:26.156 "assigned_rate_limits": { 00:22:26.156 "rw_ios_per_sec": 0, 00:22:26.156 "rw_mbytes_per_sec": 0, 00:22:26.156 "r_mbytes_per_sec": 0, 00:22:26.156 "w_mbytes_per_sec": 0 00:22:26.156 }, 00:22:26.156 "claimed": false, 00:22:26.156 "zoned": false, 00:22:26.156 "supported_io_types": { 00:22:26.157 "read": true, 00:22:26.157 "write": true, 00:22:26.157 "unmap": true, 00:22:26.157 "flush": true, 00:22:26.157 "reset": true, 00:22:26.157 "nvme_admin": false, 00:22:26.157 "nvme_io": false, 00:22:26.157 "nvme_io_md": false, 00:22:26.157 "write_zeroes": true, 00:22:26.157 "zcopy": false, 00:22:26.157 "get_zone_info": false, 00:22:26.157 "zone_management": false, 00:22:26.157 "zone_append": false, 00:22:26.157 "compare": false, 00:22:26.157 "compare_and_write": false, 00:22:26.157 "abort": false, 00:22:26.157 "seek_hole": false, 00:22:26.157 "seek_data": false, 00:22:26.157 "copy": false, 00:22:26.157 "nvme_iov_md": false 00:22:26.157 }, 00:22:26.157 "memory_domains": [ 00:22:26.157 { 00:22:26.157 "dma_device_id": "system", 00:22:26.157 "dma_device_type": 1 00:22:26.157 }, 00:22:26.157 { 00:22:26.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.157 "dma_device_type": 2 00:22:26.157 }, 00:22:26.157 { 00:22:26.157 "dma_device_id": "system", 00:22:26.157 "dma_device_type": 1 00:22:26.157 }, 00:22:26.157 { 00:22:26.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.157 "dma_device_type": 2 00:22:26.157 }, 00:22:26.157 { 00:22:26.157 "dma_device_id": "system", 00:22:26.157 "dma_device_type": 1 00:22:26.157 }, 00:22:26.157 { 00:22:26.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.157 "dma_device_type": 2 00:22:26.157 } 00:22:26.157 ], 00:22:26.157 "driver_specific": { 00:22:26.157 "raid": { 00:22:26.157 "uuid": "aa46362e-41a7-4086-aec2-db1ed3532ffd", 00:22:26.157 
"strip_size_kb": 64, 00:22:26.157 "state": "online", 00:22:26.157 "raid_level": "raid0", 00:22:26.157 "superblock": true, 00:22:26.157 "num_base_bdevs": 3, 00:22:26.157 "num_base_bdevs_discovered": 3, 00:22:26.157 "num_base_bdevs_operational": 3, 00:22:26.157 "base_bdevs_list": [ 00:22:26.157 { 00:22:26.157 "name": "NewBaseBdev", 00:22:26.157 "uuid": "adbd9986-deaa-4a12-8add-58a7cbe415bf", 00:22:26.157 "is_configured": true, 00:22:26.157 "data_offset": 2048, 00:22:26.157 "data_size": 63488 00:22:26.157 }, 00:22:26.157 { 00:22:26.157 "name": "BaseBdev2", 00:22:26.157 "uuid": "28b69cd0-cd99-47ec-a1ab-f0bb82770e9c", 00:22:26.157 "is_configured": true, 00:22:26.157 "data_offset": 2048, 00:22:26.157 "data_size": 63488 00:22:26.157 }, 00:22:26.157 { 00:22:26.157 "name": "BaseBdev3", 00:22:26.157 "uuid": "b069bb61-f2e5-4ccf-893d-5f03f1eaf9b4", 00:22:26.157 "is_configured": true, 00:22:26.157 "data_offset": 2048, 00:22:26.157 "data_size": 63488 00:22:26.157 } 00:22:26.157 ] 00:22:26.157 } 00:22:26.157 } 00:22:26.157 }' 00:22:26.157 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:26.157 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:26.157 BaseBdev2 00:22:26.157 BaseBdev3' 00:22:26.157 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:26.157 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:26.157 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:26.157 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:26.157 13:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:26.157 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.157 13:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.157 13:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.157 13:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:26.157 13:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:26.157 13:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:26.157 13:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:26.157 13:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.157 13:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:26.157 13:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.157 13:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.417 13:42:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.417 [2024-11-20 13:42:29.158658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:26.417 [2024-11-20 13:42:29.158701] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:26.417 [2024-11-20 13:42:29.158830] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:26.417 [2024-11-20 13:42:29.158958] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:26.417 [2024-11-20 13:42:29.158999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64623 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64623 ']' 00:22:26.417 13:42:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64623 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64623 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64623' 00:22:26.417 killing process with pid 64623 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64623 00:22:26.417 [2024-11-20 13:42:29.199073] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:26.417 13:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64623 00:22:26.675 [2024-11-20 13:42:29.498701] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:28.050 13:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:22:28.050 ************************************ 00:22:28.050 END TEST raid_state_function_test_sb 00:22:28.050 ************************************ 00:22:28.050 00:22:28.050 real 0m11.953s 00:22:28.050 user 0m19.711s 00:22:28.050 sys 0m1.703s 00:22:28.050 13:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:28.050 13:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.050 13:42:30 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:22:28.050 13:42:30 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:28.050 13:42:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:28.050 13:42:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:28.050 ************************************ 00:22:28.050 START TEST raid_superblock_test 00:22:28.050 ************************************ 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:22:28.050 13:42:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65260 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65260 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65260 ']' 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.050 13:42:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.050 [2024-11-20 13:42:30.763682] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:22:28.050 [2024-11-20 13:42:30.764148] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65260 ] 00:22:28.050 [2024-11-20 13:42:30.961979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.308 [2024-11-20 13:42:31.127366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.567 [2024-11-20 13:42:31.370868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:28.567 [2024-11-20 13:42:31.371166] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:22:29.135 
13:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.135 malloc1 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.135 [2024-11-20 13:42:31.902867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:29.135 [2024-11-20 13:42:31.902988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.135 [2024-11-20 13:42:31.903023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:29.135 [2024-11-20 13:42:31.903038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.135 [2024-11-20 13:42:31.905865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.135 [2024-11-20 13:42:31.906052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:29.135 pt1 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.135 malloc2 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.135 [2024-11-20 13:42:31.951678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:29.135 [2024-11-20 13:42:31.951878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.135 [2024-11-20 13:42:31.951944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:29.135 [2024-11-20 13:42:31.951962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.135 [2024-11-20 13:42:31.954707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.135 [2024-11-20 13:42:31.954752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:29.135 
pt2 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.135 13:42:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.135 malloc3 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.135 [2024-11-20 13:42:32.017523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:29.135 [2024-11-20 13:42:32.017717] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.135 [2024-11-20 13:42:32.017763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:29.135 [2024-11-20 13:42:32.017780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.135 [2024-11-20 13:42:32.020653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.135 [2024-11-20 13:42:32.020701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:29.135 pt3 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.135 [2024-11-20 13:42:32.029718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:29.135 [2024-11-20 13:42:32.032282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:29.135 [2024-11-20 13:42:32.032538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:29.135 [2024-11-20 13:42:32.032786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:29.135 [2024-11-20 13:42:32.032811] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:29.135 [2024-11-20 13:42:32.033196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:22:29.135 [2024-11-20 13:42:32.033415] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:29.135 [2024-11-20 13:42:32.033438] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:29.135 [2024-11-20 13:42:32.033719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.135 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.135 13:42:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.393 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.394 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:29.394 "name": "raid_bdev1", 00:22:29.394 "uuid": "43826af6-9bc3-4840-80e9-99a7075db78a", 00:22:29.394 "strip_size_kb": 64, 00:22:29.394 "state": "online", 00:22:29.394 "raid_level": "raid0", 00:22:29.394 "superblock": true, 00:22:29.394 "num_base_bdevs": 3, 00:22:29.394 "num_base_bdevs_discovered": 3, 00:22:29.394 "num_base_bdevs_operational": 3, 00:22:29.394 "base_bdevs_list": [ 00:22:29.394 { 00:22:29.394 "name": "pt1", 00:22:29.394 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:29.394 "is_configured": true, 00:22:29.394 "data_offset": 2048, 00:22:29.394 "data_size": 63488 00:22:29.394 }, 00:22:29.394 { 00:22:29.394 "name": "pt2", 00:22:29.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:29.394 "is_configured": true, 00:22:29.394 "data_offset": 2048, 00:22:29.394 "data_size": 63488 00:22:29.394 }, 00:22:29.394 { 00:22:29.394 "name": "pt3", 00:22:29.394 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:29.394 "is_configured": true, 00:22:29.394 "data_offset": 2048, 00:22:29.394 "data_size": 63488 00:22:29.394 } 00:22:29.394 ] 00:22:29.394 }' 00:22:29.394 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:29.394 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.652 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:29.652 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:29.652 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:29.652 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:22:29.652 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:29.652 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:29.652 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:29.652 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:29.652 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.652 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.910 [2024-11-20 13:42:32.566237] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:29.910 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.910 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:29.910 "name": "raid_bdev1", 00:22:29.910 "aliases": [ 00:22:29.910 "43826af6-9bc3-4840-80e9-99a7075db78a" 00:22:29.910 ], 00:22:29.910 "product_name": "Raid Volume", 00:22:29.910 "block_size": 512, 00:22:29.910 "num_blocks": 190464, 00:22:29.910 "uuid": "43826af6-9bc3-4840-80e9-99a7075db78a", 00:22:29.910 "assigned_rate_limits": { 00:22:29.910 "rw_ios_per_sec": 0, 00:22:29.910 "rw_mbytes_per_sec": 0, 00:22:29.910 "r_mbytes_per_sec": 0, 00:22:29.910 "w_mbytes_per_sec": 0 00:22:29.910 }, 00:22:29.910 "claimed": false, 00:22:29.910 "zoned": false, 00:22:29.910 "supported_io_types": { 00:22:29.910 "read": true, 00:22:29.910 "write": true, 00:22:29.910 "unmap": true, 00:22:29.910 "flush": true, 00:22:29.910 "reset": true, 00:22:29.910 "nvme_admin": false, 00:22:29.910 "nvme_io": false, 00:22:29.910 "nvme_io_md": false, 00:22:29.910 "write_zeroes": true, 00:22:29.910 "zcopy": false, 00:22:29.910 "get_zone_info": false, 00:22:29.910 "zone_management": false, 00:22:29.910 "zone_append": false, 00:22:29.910 "compare": 
false, 00:22:29.910 "compare_and_write": false, 00:22:29.910 "abort": false, 00:22:29.910 "seek_hole": false, 00:22:29.910 "seek_data": false, 00:22:29.910 "copy": false, 00:22:29.910 "nvme_iov_md": false 00:22:29.910 }, 00:22:29.910 "memory_domains": [ 00:22:29.910 { 00:22:29.910 "dma_device_id": "system", 00:22:29.910 "dma_device_type": 1 00:22:29.910 }, 00:22:29.910 { 00:22:29.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.910 "dma_device_type": 2 00:22:29.910 }, 00:22:29.910 { 00:22:29.910 "dma_device_id": "system", 00:22:29.910 "dma_device_type": 1 00:22:29.910 }, 00:22:29.910 { 00:22:29.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.910 "dma_device_type": 2 00:22:29.910 }, 00:22:29.910 { 00:22:29.910 "dma_device_id": "system", 00:22:29.910 "dma_device_type": 1 00:22:29.910 }, 00:22:29.910 { 00:22:29.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:29.910 "dma_device_type": 2 00:22:29.910 } 00:22:29.910 ], 00:22:29.910 "driver_specific": { 00:22:29.910 "raid": { 00:22:29.910 "uuid": "43826af6-9bc3-4840-80e9-99a7075db78a", 00:22:29.910 "strip_size_kb": 64, 00:22:29.910 "state": "online", 00:22:29.910 "raid_level": "raid0", 00:22:29.910 "superblock": true, 00:22:29.910 "num_base_bdevs": 3, 00:22:29.910 "num_base_bdevs_discovered": 3, 00:22:29.910 "num_base_bdevs_operational": 3, 00:22:29.911 "base_bdevs_list": [ 00:22:29.911 { 00:22:29.911 "name": "pt1", 00:22:29.911 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:29.911 "is_configured": true, 00:22:29.911 "data_offset": 2048, 00:22:29.911 "data_size": 63488 00:22:29.911 }, 00:22:29.911 { 00:22:29.911 "name": "pt2", 00:22:29.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:29.911 "is_configured": true, 00:22:29.911 "data_offset": 2048, 00:22:29.911 "data_size": 63488 00:22:29.911 }, 00:22:29.911 { 00:22:29.911 "name": "pt3", 00:22:29.911 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:29.911 "is_configured": true, 00:22:29.911 "data_offset": 2048, 00:22:29.911 "data_size": 
63488 00:22:29.911 } 00:22:29.911 ] 00:22:29.911 } 00:22:29.911 } 00:22:29.911 }' 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:29.911 pt2 00:22:29.911 pt3' 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.911 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.170 [2024-11-20 13:42:32.882308] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=43826af6-9bc3-4840-80e9-99a7075db78a 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 43826af6-9bc3-4840-80e9-99a7075db78a ']' 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.170 [2024-11-20 13:42:32.933914] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:30.170 [2024-11-20 13:42:32.933949] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:30.170 [2024-11-20 13:42:32.934049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:30.170 [2024-11-20 13:42:32.934132] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:30.170 [2024-11-20 13:42:32.934147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.170 13:42:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.170 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.429 [2024-11-20 13:42:33.082030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:30.429 [2024-11-20 13:42:33.084504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:30.429 [2024-11-20 13:42:33.084570] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:30.429 [2024-11-20 13:42:33.084645] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:30.429 [2024-11-20 13:42:33.084720] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:30.429 [2024-11-20 13:42:33.084764] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:30.429 [2024-11-20 13:42:33.084791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:30.429 [2024-11-20 13:42:33.084806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:30.429 request: 00:22:30.429 { 00:22:30.429 "name": "raid_bdev1", 00:22:30.429 "raid_level": "raid0", 00:22:30.429 "base_bdevs": [ 00:22:30.429 "malloc1", 00:22:30.429 "malloc2", 00:22:30.429 "malloc3" 00:22:30.429 ], 00:22:30.429 "strip_size_kb": 64, 00:22:30.429 "superblock": false, 00:22:30.429 "method": "bdev_raid_create", 00:22:30.429 "req_id": 1 00:22:30.429 } 00:22:30.429 Got JSON-RPC error response 00:22:30.429 response: 00:22:30.429 { 00:22:30.429 "code": -17, 00:22:30.429 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:30.429 } 00:22:30.429 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:30.429 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:22:30.429 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:30.429 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:30.429 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:30.429 13:42:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.430 [2024-11-20 13:42:33.153932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:30.430 [2024-11-20 13:42:33.154011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.430 [2024-11-20 13:42:33.154042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:30.430 [2024-11-20 13:42:33.154057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.430 [2024-11-20 13:42:33.157119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.430 [2024-11-20 13:42:33.157172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:30.430 [2024-11-20 13:42:33.157291] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:30.430 [2024-11-20 13:42:33.157378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:22:30.430 pt1 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:30.430 "name": "raid_bdev1", 00:22:30.430 "uuid": "43826af6-9bc3-4840-80e9-99a7075db78a", 00:22:30.430 
"strip_size_kb": 64, 00:22:30.430 "state": "configuring", 00:22:30.430 "raid_level": "raid0", 00:22:30.430 "superblock": true, 00:22:30.430 "num_base_bdevs": 3, 00:22:30.430 "num_base_bdevs_discovered": 1, 00:22:30.430 "num_base_bdevs_operational": 3, 00:22:30.430 "base_bdevs_list": [ 00:22:30.430 { 00:22:30.430 "name": "pt1", 00:22:30.430 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:30.430 "is_configured": true, 00:22:30.430 "data_offset": 2048, 00:22:30.430 "data_size": 63488 00:22:30.430 }, 00:22:30.430 { 00:22:30.430 "name": null, 00:22:30.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:30.430 "is_configured": false, 00:22:30.430 "data_offset": 2048, 00:22:30.430 "data_size": 63488 00:22:30.430 }, 00:22:30.430 { 00:22:30.430 "name": null, 00:22:30.430 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:30.430 "is_configured": false, 00:22:30.430 "data_offset": 2048, 00:22:30.430 "data_size": 63488 00:22:30.430 } 00:22:30.430 ] 00:22:30.430 }' 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:30.430 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.998 [2024-11-20 13:42:33.678153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:30.998 [2024-11-20 13:42:33.678246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.998 [2024-11-20 13:42:33.678289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:22:30.998 [2024-11-20 13:42:33.678305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.998 [2024-11-20 13:42:33.678878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.998 [2024-11-20 13:42:33.678943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:30.998 [2024-11-20 13:42:33.679060] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:30.998 [2024-11-20 13:42:33.679102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:30.998 pt2 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.998 [2024-11-20 13:42:33.686088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:30.998 13:42:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:30.998 "name": "raid_bdev1", 00:22:30.998 "uuid": "43826af6-9bc3-4840-80e9-99a7075db78a", 00:22:30.998 "strip_size_kb": 64, 00:22:30.998 "state": "configuring", 00:22:30.998 "raid_level": "raid0", 00:22:30.998 "superblock": true, 00:22:30.998 "num_base_bdevs": 3, 00:22:30.998 "num_base_bdevs_discovered": 1, 00:22:30.998 "num_base_bdevs_operational": 3, 00:22:30.998 "base_bdevs_list": [ 00:22:30.998 { 00:22:30.998 "name": "pt1", 00:22:30.998 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:30.998 "is_configured": true, 00:22:30.998 "data_offset": 2048, 00:22:30.998 "data_size": 63488 00:22:30.998 }, 00:22:30.998 { 00:22:30.998 "name": null, 00:22:30.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:30.998 "is_configured": false, 00:22:30.998 "data_offset": 0, 00:22:30.998 "data_size": 63488 00:22:30.998 }, 00:22:30.998 { 00:22:30.998 "name": null, 00:22:30.998 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:30.998 
"is_configured": false, 00:22:30.998 "data_offset": 2048, 00:22:30.998 "data_size": 63488 00:22:30.998 } 00:22:30.998 ] 00:22:30.998 }' 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:30.998 13:42:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.564 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:31.564 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:31.564 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:31.564 13:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.564 13:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.564 [2024-11-20 13:42:34.266251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:31.564 [2024-11-20 13:42:34.266351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.565 [2024-11-20 13:42:34.266392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:31.565 [2024-11-20 13:42:34.266419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:31.565 [2024-11-20 13:42:34.267242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.565 [2024-11-20 13:42:34.267300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:31.565 [2024-11-20 13:42:34.267458] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:31.565 [2024-11-20 13:42:34.267518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:31.565 pt2 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.565 [2024-11-20 13:42:34.274264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:31.565 [2024-11-20 13:42:34.274520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.565 [2024-11-20 13:42:34.274573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:31.565 [2024-11-20 13:42:34.274610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:31.565 [2024-11-20 13:42:34.275246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.565 [2024-11-20 13:42:34.275314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:31.565 [2024-11-20 13:42:34.275432] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:31.565 [2024-11-20 13:42:34.275484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:31.565 [2024-11-20 13:42:34.275699] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:31.565 [2024-11-20 13:42:34.275741] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:31.565 [2024-11-20 13:42:34.276188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:31.565 [2024-11-20 13:42:34.276492] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:31.565 [2024-11-20 13:42:34.276517] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:31.565 [2024-11-20 13:42:34.276710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:31.565 pt3 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:31.565 "name": "raid_bdev1", 00:22:31.565 "uuid": "43826af6-9bc3-4840-80e9-99a7075db78a", 00:22:31.565 "strip_size_kb": 64, 00:22:31.565 "state": "online", 00:22:31.565 "raid_level": "raid0", 00:22:31.565 "superblock": true, 00:22:31.565 "num_base_bdevs": 3, 00:22:31.565 "num_base_bdevs_discovered": 3, 00:22:31.565 "num_base_bdevs_operational": 3, 00:22:31.565 "base_bdevs_list": [ 00:22:31.565 { 00:22:31.565 "name": "pt1", 00:22:31.565 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:31.565 "is_configured": true, 00:22:31.565 "data_offset": 2048, 00:22:31.565 "data_size": 63488 00:22:31.565 }, 00:22:31.565 { 00:22:31.565 "name": "pt2", 00:22:31.565 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:31.565 "is_configured": true, 00:22:31.565 "data_offset": 2048, 00:22:31.565 "data_size": 63488 00:22:31.565 }, 00:22:31.565 { 00:22:31.565 "name": "pt3", 00:22:31.565 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:31.565 "is_configured": true, 00:22:31.565 "data_offset": 2048, 00:22:31.565 "data_size": 63488 00:22:31.565 } 00:22:31.565 ] 00:22:31.565 }' 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:31.565 13:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.132 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:32.132 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:32.132 13:42:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:32.132 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:32.132 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:32.132 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:32.132 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:32.132 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:32.132 13:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.132 13:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.132 [2024-11-20 13:42:34.786832] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:32.132 13:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.132 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:32.132 "name": "raid_bdev1", 00:22:32.132 "aliases": [ 00:22:32.132 "43826af6-9bc3-4840-80e9-99a7075db78a" 00:22:32.132 ], 00:22:32.132 "product_name": "Raid Volume", 00:22:32.132 "block_size": 512, 00:22:32.132 "num_blocks": 190464, 00:22:32.132 "uuid": "43826af6-9bc3-4840-80e9-99a7075db78a", 00:22:32.132 "assigned_rate_limits": { 00:22:32.132 "rw_ios_per_sec": 0, 00:22:32.132 "rw_mbytes_per_sec": 0, 00:22:32.132 "r_mbytes_per_sec": 0, 00:22:32.132 "w_mbytes_per_sec": 0 00:22:32.132 }, 00:22:32.132 "claimed": false, 00:22:32.132 "zoned": false, 00:22:32.132 "supported_io_types": { 00:22:32.132 "read": true, 00:22:32.132 "write": true, 00:22:32.132 "unmap": true, 00:22:32.132 "flush": true, 00:22:32.132 "reset": true, 00:22:32.132 "nvme_admin": false, 00:22:32.132 "nvme_io": false, 00:22:32.132 "nvme_io_md": false, 00:22:32.132 
"write_zeroes": true, 00:22:32.132 "zcopy": false, 00:22:32.132 "get_zone_info": false, 00:22:32.132 "zone_management": false, 00:22:32.132 "zone_append": false, 00:22:32.132 "compare": false, 00:22:32.132 "compare_and_write": false, 00:22:32.132 "abort": false, 00:22:32.132 "seek_hole": false, 00:22:32.132 "seek_data": false, 00:22:32.132 "copy": false, 00:22:32.132 "nvme_iov_md": false 00:22:32.132 }, 00:22:32.132 "memory_domains": [ 00:22:32.132 { 00:22:32.132 "dma_device_id": "system", 00:22:32.132 "dma_device_type": 1 00:22:32.132 }, 00:22:32.132 { 00:22:32.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:32.132 "dma_device_type": 2 00:22:32.132 }, 00:22:32.132 { 00:22:32.132 "dma_device_id": "system", 00:22:32.132 "dma_device_type": 1 00:22:32.132 }, 00:22:32.132 { 00:22:32.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:32.132 "dma_device_type": 2 00:22:32.132 }, 00:22:32.132 { 00:22:32.132 "dma_device_id": "system", 00:22:32.132 "dma_device_type": 1 00:22:32.132 }, 00:22:32.132 { 00:22:32.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:32.132 "dma_device_type": 2 00:22:32.132 } 00:22:32.132 ], 00:22:32.132 "driver_specific": { 00:22:32.132 "raid": { 00:22:32.132 "uuid": "43826af6-9bc3-4840-80e9-99a7075db78a", 00:22:32.132 "strip_size_kb": 64, 00:22:32.132 "state": "online", 00:22:32.132 "raid_level": "raid0", 00:22:32.133 "superblock": true, 00:22:32.133 "num_base_bdevs": 3, 00:22:32.133 "num_base_bdevs_discovered": 3, 00:22:32.133 "num_base_bdevs_operational": 3, 00:22:32.133 "base_bdevs_list": [ 00:22:32.133 { 00:22:32.133 "name": "pt1", 00:22:32.133 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:32.133 "is_configured": true, 00:22:32.133 "data_offset": 2048, 00:22:32.133 "data_size": 63488 00:22:32.133 }, 00:22:32.133 { 00:22:32.133 "name": "pt2", 00:22:32.133 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:32.133 "is_configured": true, 00:22:32.133 "data_offset": 2048, 00:22:32.133 "data_size": 63488 00:22:32.133 }, 00:22:32.133 
{ 00:22:32.133 "name": "pt3", 00:22:32.133 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:32.133 "is_configured": true, 00:22:32.133 "data_offset": 2048, 00:22:32.133 "data_size": 63488 00:22:32.133 } 00:22:32.133 ] 00:22:32.133 } 00:22:32.133 } 00:22:32.133 }' 00:22:32.133 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:32.133 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:32.133 pt2 00:22:32.133 pt3' 00:22:32.133 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:32.133 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:32.133 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:32.133 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:32.133 13:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.133 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:32.133 13:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.133 13:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.133 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:32.133 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:32.133 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:32.133 13:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:32.133 13:42:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:32.133 13:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.133 13:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.133 13:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.133 13:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:32.133 13:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:32.133 13:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:32.133 13:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:32.133 13:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:32.133 13:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.133 13:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.133 13:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.392 
[2024-11-20 13:42:35.082851] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 43826af6-9bc3-4840-80e9-99a7075db78a '!=' 43826af6-9bc3-4840-80e9-99a7075db78a ']' 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65260 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65260 ']' 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65260 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65260 00:22:32.392 killing process with pid 65260 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65260' 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65260 00:22:32.392 [2024-11-20 13:42:35.155190] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:32.392 13:42:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65260 00:22:32.392 [2024-11-20 13:42:35.155330] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:32.393 [2024-11-20 13:42:35.155409] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:32.393 [2024-11-20 13:42:35.155428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:32.651 [2024-11-20 13:42:35.433415] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:33.587 13:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:22:33.587 00:22:33.587 real 0m5.821s 00:22:33.587 user 0m8.807s 00:22:33.587 sys 0m0.847s 00:22:33.587 13:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:33.587 ************************************ 00:22:33.587 END TEST raid_superblock_test 00:22:33.587 ************************************ 00:22:33.587 13:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.846 13:42:36 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:22:33.846 13:42:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:33.846 13:42:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:33.846 13:42:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:33.846 ************************************ 00:22:33.846 START TEST raid_read_error_test 00:22:33.846 ************************************ 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:22:33.846 13:42:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:22:33.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.L1BxUnbhYP 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65517 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65517 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65517 ']' 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.846 13:42:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.846 [2024-11-20 13:42:36.622215] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:22:33.846 [2024-11-20 13:42:36.622380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65517 ] 00:22:34.104 [2024-11-20 13:42:36.797026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.105 [2024-11-20 13:42:36.926760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.363 [2024-11-20 13:42:37.131112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:34.363 [2024-11-20 13:42:37.131197] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.930 BaseBdev1_malloc 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.930 true 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.930 [2024-11-20 13:42:37.636708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:34.930 [2024-11-20 13:42:37.636782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.930 [2024-11-20 13:42:37.636815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:34.930 [2024-11-20 13:42:37.636834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.930 [2024-11-20 13:42:37.639723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:34.930 [2024-11-20 13:42:37.639929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:34.930 BaseBdev1 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.930 BaseBdev2_malloc 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.930 true 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.930 [2024-11-20 13:42:37.693889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:34.930 [2024-11-20 13:42:37.693975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.930 [2024-11-20 13:42:37.694004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:34.930 [2024-11-20 13:42:37.694023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.930 [2024-11-20 13:42:37.696859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:34.930 [2024-11-20 13:42:37.696938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:34.930 BaseBdev2 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.930 BaseBdev3_malloc 00:22:34.930 13:42:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.930 true 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.930 [2024-11-20 13:42:37.759043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:34.930 [2024-11-20 13:42:37.759120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.930 [2024-11-20 13:42:37.759151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:34.930 [2024-11-20 13:42:37.759171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.930 [2024-11-20 13:42:37.762121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:34.930 [2024-11-20 13:42:37.762173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:34.930 BaseBdev3 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.930 [2024-11-20 13:42:37.767259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:34.930 [2024-11-20 13:42:37.769753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:34.930 [2024-11-20 13:42:37.769875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:34.930 [2024-11-20 13:42:37.770198] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:34.930 [2024-11-20 13:42:37.770222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:34.930 [2024-11-20 13:42:37.770576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:22:34.930 [2024-11-20 13:42:37.770819] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:34.930 [2024-11-20 13:42:37.770843] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:34.930 [2024-11-20 13:42:37.771153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:22:34.930 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:34.931 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:34.931 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:34.931 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:34.931 13:42:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:34.931 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:34.931 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:34.931 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:34.931 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:34.931 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.931 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.931 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.931 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.931 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.931 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:34.931 "name": "raid_bdev1", 00:22:34.931 "uuid": "2ecd096d-97c9-404f-9b4c-e75a0f11f318", 00:22:34.931 "strip_size_kb": 64, 00:22:34.931 "state": "online", 00:22:34.931 "raid_level": "raid0", 00:22:34.931 "superblock": true, 00:22:34.931 "num_base_bdevs": 3, 00:22:34.931 "num_base_bdevs_discovered": 3, 00:22:34.931 "num_base_bdevs_operational": 3, 00:22:34.931 "base_bdevs_list": [ 00:22:34.931 { 00:22:34.931 "name": "BaseBdev1", 00:22:34.931 "uuid": "f6b76ed2-b9cb-5352-a696-d0008ba14c7a", 00:22:34.931 "is_configured": true, 00:22:34.931 "data_offset": 2048, 00:22:34.931 "data_size": 63488 00:22:34.931 }, 00:22:34.931 { 00:22:34.931 "name": "BaseBdev2", 00:22:34.931 "uuid": "92402336-0dd2-54fd-be8a-de41a089e059", 00:22:34.931 "is_configured": true, 00:22:34.931 "data_offset": 2048, 00:22:34.931 "data_size": 63488 
00:22:34.931 }, 00:22:34.931 { 00:22:34.931 "name": "BaseBdev3", 00:22:34.931 "uuid": "13d3144e-ec6e-59fd-8380-ebb1b1703473", 00:22:34.931 "is_configured": true, 00:22:34.931 "data_offset": 2048, 00:22:34.931 "data_size": 63488 00:22:34.931 } 00:22:34.931 ] 00:22:34.931 }' 00:22:34.931 13:42:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:34.931 13:42:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.498 13:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:22:35.498 13:42:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:22:35.756 [2024-11-20 13:42:38.416787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:36.690 "name": "raid_bdev1", 00:22:36.690 "uuid": "2ecd096d-97c9-404f-9b4c-e75a0f11f318", 00:22:36.690 "strip_size_kb": 64, 00:22:36.690 "state": "online", 00:22:36.690 "raid_level": "raid0", 00:22:36.690 "superblock": true, 00:22:36.690 "num_base_bdevs": 3, 00:22:36.690 "num_base_bdevs_discovered": 3, 00:22:36.690 "num_base_bdevs_operational": 3, 00:22:36.690 "base_bdevs_list": [ 00:22:36.690 { 00:22:36.690 "name": "BaseBdev1", 00:22:36.690 "uuid": "f6b76ed2-b9cb-5352-a696-d0008ba14c7a", 00:22:36.690 "is_configured": true, 00:22:36.690 "data_offset": 2048, 00:22:36.690 "data_size": 63488 
00:22:36.690 }, 00:22:36.690 { 00:22:36.690 "name": "BaseBdev2", 00:22:36.690 "uuid": "92402336-0dd2-54fd-be8a-de41a089e059", 00:22:36.690 "is_configured": true, 00:22:36.690 "data_offset": 2048, 00:22:36.690 "data_size": 63488 00:22:36.690 }, 00:22:36.690 { 00:22:36.690 "name": "BaseBdev3", 00:22:36.690 "uuid": "13d3144e-ec6e-59fd-8380-ebb1b1703473", 00:22:36.690 "is_configured": true, 00:22:36.690 "data_offset": 2048, 00:22:36.690 "data_size": 63488 00:22:36.690 } 00:22:36.690 ] 00:22:36.690 }' 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:36.690 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.948 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:36.948 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.948 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.948 [2024-11-20 13:42:39.745678] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:36.948 [2024-11-20 13:42:39.745855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:36.948 [2024-11-20 13:42:39.749451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:36.948 [2024-11-20 13:42:39.749509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:36.948 [2024-11-20 13:42:39.749563] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:36.948 [2024-11-20 13:42:39.749577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:36.948 { 00:22:36.948 "results": [ 00:22:36.948 { 00:22:36.948 "job": "raid_bdev1", 00:22:36.948 "core_mask": "0x1", 00:22:36.948 "workload": "randrw", 00:22:36.948 "percentage": 50, 
00:22:36.948 "status": "finished", 00:22:36.948 "queue_depth": 1, 00:22:36.948 "io_size": 131072, 00:22:36.948 "runtime": 1.326678, 00:22:36.948 "iops": 9807.956414442691, 00:22:36.948 "mibps": 1225.9945518053364, 00:22:36.948 "io_failed": 1, 00:22:36.948 "io_timeout": 0, 00:22:36.948 "avg_latency_us": 142.34011694599107, 00:22:36.948 "min_latency_us": 30.72, 00:22:36.948 "max_latency_us": 1846.9236363636364 00:22:36.948 } 00:22:36.948 ], 00:22:36.948 "core_count": 1 00:22:36.948 } 00:22:36.948 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.948 13:42:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65517 00:22:36.948 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65517 ']' 00:22:36.948 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65517 00:22:36.948 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:22:36.948 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:36.948 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65517 00:22:36.949 killing process with pid 65517 00:22:36.949 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:36.949 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:36.949 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65517' 00:22:36.949 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65517 00:22:36.949 13:42:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65517 00:22:36.949 [2024-11-20 13:42:39.787270] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:37.207 [2024-11-20 13:42:39.997719] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:38.583 13:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:22:38.583 13:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.L1BxUnbhYP 00:22:38.583 13:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:22:38.583 13:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:22:38.583 13:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:22:38.583 13:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:38.583 13:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:38.583 13:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:22:38.583 00:22:38.583 real 0m4.610s 00:22:38.583 user 0m5.685s 00:22:38.583 sys 0m0.537s 00:22:38.583 ************************************ 00:22:38.583 END TEST raid_read_error_test 00:22:38.583 ************************************ 00:22:38.583 13:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:38.583 13:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.583 13:42:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:22:38.583 13:42:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:38.583 13:42:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:38.583 13:42:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:38.583 ************************************ 00:22:38.583 START TEST raid_write_error_test 00:22:38.583 ************************************ 00:22:38.583 13:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:22:38.583 13:42:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:22:38.583 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:22:38.583 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:22:38.583 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:22:38.583 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:38.583 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:22:38.583 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:22:38.584 13:42:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IieaWpRhG4 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65664 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65664 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65664 ']' 00:22:38.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:38.584 13:42:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.584 [2024-11-20 13:42:41.280574] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:22:38.584 [2024-11-20 13:42:41.280737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65664 ] 00:22:38.843 [2024-11-20 13:42:41.500151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.843 [2024-11-20 13:42:41.633448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.101 [2024-11-20 13:42:41.839577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:39.101 [2024-11-20 13:42:41.839920] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:39.360 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.360 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:22:39.360 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:39.360 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:39.360 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.360 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.360 BaseBdev1_malloc 00:22:39.360 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.360 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:22:39.360 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.360 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.360 true 00:22:39.360 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.360 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:39.360 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.360 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.360 [2024-11-20 13:42:42.268883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:39.360 [2024-11-20 13:42:42.268976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.360 [2024-11-20 13:42:42.269009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:39.360 [2024-11-20 13:42:42.269029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.360 [2024-11-20 13:42:42.271878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.360 [2024-11-20 13:42:42.271949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:39.360 BaseBdev1 00:22:39.360 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.360 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:39.360 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:39.360 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.360 13:42:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:39.618 BaseBdev2_malloc 00:22:39.618 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.618 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:22:39.618 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.618 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.618 true 00:22:39.618 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.618 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:39.618 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.618 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.618 [2024-11-20 13:42:42.325270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:39.619 [2024-11-20 13:42:42.325361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.619 [2024-11-20 13:42:42.325391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:39.619 [2024-11-20 13:42:42.325409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.619 [2024-11-20 13:42:42.328373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.619 [2024-11-20 13:42:42.328427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:39.619 BaseBdev2 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:22:39.619 13:42:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.619 BaseBdev3_malloc 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.619 true 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.619 [2024-11-20 13:42:42.396500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:39.619 [2024-11-20 13:42:42.396580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.619 [2024-11-20 13:42:42.396613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:39.619 [2024-11-20 13:42:42.396632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.619 [2024-11-20 13:42:42.399592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.619 [2024-11-20 13:42:42.399785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:22:39.619 BaseBdev3 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.619 [2024-11-20 13:42:42.404752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:39.619 [2024-11-20 13:42:42.407235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:39.619 [2024-11-20 13:42:42.407345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:39.619 [2024-11-20 13:42:42.407617] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:39.619 [2024-11-20 13:42:42.407640] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:39.619 [2024-11-20 13:42:42.407997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:22:39.619 [2024-11-20 13:42:42.408233] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:39.619 [2024-11-20 13:42:42.408257] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:39.619 [2024-11-20 13:42:42.408462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:39.619 "name": "raid_bdev1", 00:22:39.619 "uuid": "89453b74-8b6a-4a00-b24f-04427877fb04", 00:22:39.619 "strip_size_kb": 64, 00:22:39.619 "state": "online", 00:22:39.619 "raid_level": "raid0", 00:22:39.619 "superblock": true, 00:22:39.619 "num_base_bdevs": 3, 00:22:39.619 "num_base_bdevs_discovered": 3, 00:22:39.619 "num_base_bdevs_operational": 3, 00:22:39.619 "base_bdevs_list": [ 00:22:39.619 { 00:22:39.619 "name": "BaseBdev1", 
00:22:39.619 "uuid": "1c9c446d-8ad8-5fe8-b4d2-82c5e789ba48", 00:22:39.619 "is_configured": true, 00:22:39.619 "data_offset": 2048, 00:22:39.619 "data_size": 63488 00:22:39.619 }, 00:22:39.619 { 00:22:39.619 "name": "BaseBdev2", 00:22:39.619 "uuid": "2a9cf1c9-530f-587b-8cfb-53d8ce47c7e3", 00:22:39.619 "is_configured": true, 00:22:39.619 "data_offset": 2048, 00:22:39.619 "data_size": 63488 00:22:39.619 }, 00:22:39.619 { 00:22:39.619 "name": "BaseBdev3", 00:22:39.619 "uuid": "fa8cd3cd-df2e-5366-ac47-ca346c40ad60", 00:22:39.619 "is_configured": true, 00:22:39.619 "data_offset": 2048, 00:22:39.619 "data_size": 63488 00:22:39.619 } 00:22:39.619 ] 00:22:39.619 }' 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:39.619 13:42:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.183 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:22:40.183 13:42:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:22:40.183 [2024-11-20 13:42:43.058405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.116 13:42:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.116 13:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:41.116 "name": "raid_bdev1", 00:22:41.116 "uuid": "89453b74-8b6a-4a00-b24f-04427877fb04", 00:22:41.116 "strip_size_kb": 64, 00:22:41.116 "state": "online", 00:22:41.116 
"raid_level": "raid0", 00:22:41.116 "superblock": true, 00:22:41.116 "num_base_bdevs": 3, 00:22:41.116 "num_base_bdevs_discovered": 3, 00:22:41.116 "num_base_bdevs_operational": 3, 00:22:41.116 "base_bdevs_list": [ 00:22:41.116 { 00:22:41.116 "name": "BaseBdev1", 00:22:41.116 "uuid": "1c9c446d-8ad8-5fe8-b4d2-82c5e789ba48", 00:22:41.116 "is_configured": true, 00:22:41.116 "data_offset": 2048, 00:22:41.116 "data_size": 63488 00:22:41.116 }, 00:22:41.116 { 00:22:41.116 "name": "BaseBdev2", 00:22:41.116 "uuid": "2a9cf1c9-530f-587b-8cfb-53d8ce47c7e3", 00:22:41.116 "is_configured": true, 00:22:41.116 "data_offset": 2048, 00:22:41.116 "data_size": 63488 00:22:41.116 }, 00:22:41.116 { 00:22:41.116 "name": "BaseBdev3", 00:22:41.116 "uuid": "fa8cd3cd-df2e-5366-ac47-ca346c40ad60", 00:22:41.116 "is_configured": true, 00:22:41.116 "data_offset": 2048, 00:22:41.116 "data_size": 63488 00:22:41.116 } 00:22:41.116 ] 00:22:41.116 }' 00:22:41.116 13:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:41.116 13:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.685 13:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:41.685 13:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.685 13:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.685 [2024-11-20 13:42:44.488986] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:41.685 [2024-11-20 13:42:44.489196] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:41.685 [2024-11-20 13:42:44.492756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:41.685 { 00:22:41.685 "results": [ 00:22:41.685 { 00:22:41.685 "job": "raid_bdev1", 00:22:41.685 "core_mask": "0x1", 00:22:41.685 "workload": "randrw", 00:22:41.685 "percentage": 
50, 00:22:41.685 "status": "finished", 00:22:41.685 "queue_depth": 1, 00:22:41.685 "io_size": 131072, 00:22:41.685 "runtime": 1.427903, 00:22:41.685 "iops": 9868.317385704771, 00:22:41.685 "mibps": 1233.5396732130964, 00:22:41.685 "io_failed": 1, 00:22:41.685 "io_timeout": 0, 00:22:41.685 "avg_latency_us": 140.93233762547416, 00:22:41.685 "min_latency_us": 29.672727272727272, 00:22:41.685 "max_latency_us": 1839.4763636363637 00:22:41.685 } 00:22:41.685 ], 00:22:41.685 "core_count": 1 00:22:41.685 } 00:22:41.685 [2024-11-20 13:42:44.492989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.685 [2024-11-20 13:42:44.493077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:41.685 [2024-11-20 13:42:44.493094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:41.685 13:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.685 13:42:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65664 00:22:41.685 13:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65664 ']' 00:22:41.685 13:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65664 00:22:41.685 13:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:22:41.685 13:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.685 13:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65664 00:22:41.685 killing process with pid 65664 00:22:41.685 13:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:41.685 13:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:41.685 13:42:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65664' 00:22:41.685 13:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65664 00:22:41.685 13:42:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65664 00:22:41.685 [2024-11-20 13:42:44.519629] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:41.944 [2024-11-20 13:42:44.732424] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:43.320 13:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:22:43.320 13:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IieaWpRhG4 00:22:43.320 13:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:22:43.320 13:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:22:43.320 13:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:22:43.320 13:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:43.320 13:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:43.320 13:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:22:43.320 ************************************ 00:22:43.320 END TEST raid_write_error_test 00:22:43.320 ************************************ 00:22:43.320 00:22:43.320 real 0m4.712s 00:22:43.320 user 0m5.847s 00:22:43.320 sys 0m0.504s 00:22:43.320 13:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:43.320 13:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.320 13:42:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:22:43.320 13:42:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:22:43.320 13:42:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:43.320 13:42:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:43.320 13:42:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:43.320 ************************************ 00:22:43.320 START TEST raid_state_function_test 00:22:43.320 ************************************ 00:22:43.320 13:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:22:43.320 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:22:43.320 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:22:43.320 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:22:43.320 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:43.320 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:43.320 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:43.320 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:43.320 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:43.320 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:43.320 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:43.320 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:43.320 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:43.320 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:43.320 13:42:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:43.320 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:43.320 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:43.320 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:43.320 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:43.320 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:43.321 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:43.321 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:43.321 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:22:43.321 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:43.321 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:43.321 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:22:43.321 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:22:43.321 Process raid pid: 65808 00:22:43.321 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65808 00:22:43.321 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65808' 00:22:43.321 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65808 00:22:43.321 13:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65808 ']' 00:22:43.321 13:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.321 13:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.321 13:42:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:43.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.321 13:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.321 13:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.321 13:42:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.321 [2024-11-20 13:42:46.074780] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:22:43.321 [2024-11-20 13:42:46.075327] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.579 [2024-11-20 13:42:46.256759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.579 [2024-11-20 13:42:46.392608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.838 [2024-11-20 13:42:46.605013] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:43.838 [2024-11-20 13:42:46.605297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 
64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.406 [2024-11-20 13:42:47.122378] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:44.406 [2024-11-20 13:42:47.122454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:44.406 [2024-11-20 13:42:47.122473] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:44.406 [2024-11-20 13:42:47.122491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:44.406 [2024-11-20 13:42:47.122501] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:44.406 [2024-11-20 13:42:47.122516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.406 "name": "Existed_Raid", 00:22:44.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.406 "strip_size_kb": 64, 00:22:44.406 "state": "configuring", 00:22:44.406 "raid_level": "concat", 00:22:44.406 "superblock": false, 00:22:44.406 "num_base_bdevs": 3, 00:22:44.406 "num_base_bdevs_discovered": 0, 00:22:44.406 "num_base_bdevs_operational": 3, 00:22:44.406 "base_bdevs_list": [ 00:22:44.406 { 00:22:44.406 "name": "BaseBdev1", 00:22:44.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.406 "is_configured": false, 00:22:44.406 "data_offset": 0, 00:22:44.406 "data_size": 0 00:22:44.406 }, 00:22:44.406 { 00:22:44.406 "name": "BaseBdev2", 00:22:44.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.406 "is_configured": false, 00:22:44.406 "data_offset": 0, 00:22:44.406 "data_size": 0 00:22:44.406 }, 00:22:44.406 { 00:22:44.406 "name": "BaseBdev3", 00:22:44.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.406 "is_configured": 
false, 00:22:44.406 "data_offset": 0, 00:22:44.406 "data_size": 0 00:22:44.406 } 00:22:44.406 ] 00:22:44.406 }' 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.406 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.975 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:44.975 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.975 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.975 [2024-11-20 13:42:47.630468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:44.975 [2024-11-20 13:42:47.630518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:44.975 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.975 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:44.975 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.975 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.975 [2024-11-20 13:42:47.638476] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:44.975 [2024-11-20 13:42:47.638541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:44.975 [2024-11-20 13:42:47.638557] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:44.975 [2024-11-20 13:42:47.638574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:44.975 [2024-11-20 13:42:47.638584] 
bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:44.975 [2024-11-20 13:42:47.638599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.976 [2024-11-20 13:42:47.684213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:44.976 BaseBdev1 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.976 [ 00:22:44.976 { 00:22:44.976 "name": "BaseBdev1", 00:22:44.976 "aliases": [ 00:22:44.976 "bcfea319-4926-4778-bf54-7b76a366f03a" 00:22:44.976 ], 00:22:44.976 "product_name": "Malloc disk", 00:22:44.976 "block_size": 512, 00:22:44.976 "num_blocks": 65536, 00:22:44.976 "uuid": "bcfea319-4926-4778-bf54-7b76a366f03a", 00:22:44.976 "assigned_rate_limits": { 00:22:44.976 "rw_ios_per_sec": 0, 00:22:44.976 "rw_mbytes_per_sec": 0, 00:22:44.976 "r_mbytes_per_sec": 0, 00:22:44.976 "w_mbytes_per_sec": 0 00:22:44.976 }, 00:22:44.976 "claimed": true, 00:22:44.976 "claim_type": "exclusive_write", 00:22:44.976 "zoned": false, 00:22:44.976 "supported_io_types": { 00:22:44.976 "read": true, 00:22:44.976 "write": true, 00:22:44.976 "unmap": true, 00:22:44.976 "flush": true, 00:22:44.976 "reset": true, 00:22:44.976 "nvme_admin": false, 00:22:44.976 "nvme_io": false, 00:22:44.976 "nvme_io_md": false, 00:22:44.976 "write_zeroes": true, 00:22:44.976 "zcopy": true, 00:22:44.976 "get_zone_info": false, 00:22:44.976 "zone_management": false, 00:22:44.976 "zone_append": false, 00:22:44.976 "compare": false, 00:22:44.976 "compare_and_write": false, 00:22:44.976 "abort": true, 00:22:44.976 "seek_hole": false, 00:22:44.976 "seek_data": false, 00:22:44.976 "copy": true, 00:22:44.976 "nvme_iov_md": false 00:22:44.976 }, 00:22:44.976 "memory_domains": [ 00:22:44.976 { 00:22:44.976 "dma_device_id": "system", 00:22:44.976 "dma_device_type": 1 00:22:44.976 }, 00:22:44.976 { 00:22:44.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.976 "dma_device_type": 2 00:22:44.976 } 00:22:44.976 ], 
00:22:44.976 "driver_specific": {} 00:22:44.976 } 00:22:44.976 ] 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.976 "name": "Existed_Raid", 00:22:44.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.976 "strip_size_kb": 64, 00:22:44.976 "state": "configuring", 00:22:44.976 "raid_level": "concat", 00:22:44.976 "superblock": false, 00:22:44.976 "num_base_bdevs": 3, 00:22:44.976 "num_base_bdevs_discovered": 1, 00:22:44.976 "num_base_bdevs_operational": 3, 00:22:44.976 "base_bdevs_list": [ 00:22:44.976 { 00:22:44.976 "name": "BaseBdev1", 00:22:44.976 "uuid": "bcfea319-4926-4778-bf54-7b76a366f03a", 00:22:44.976 "is_configured": true, 00:22:44.976 "data_offset": 0, 00:22:44.976 "data_size": 65536 00:22:44.976 }, 00:22:44.976 { 00:22:44.976 "name": "BaseBdev2", 00:22:44.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.976 "is_configured": false, 00:22:44.976 "data_offset": 0, 00:22:44.976 "data_size": 0 00:22:44.976 }, 00:22:44.976 { 00:22:44.976 "name": "BaseBdev3", 00:22:44.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.976 "is_configured": false, 00:22:44.976 "data_offset": 0, 00:22:44.976 "data_size": 0 00:22:44.976 } 00:22:44.976 ] 00:22:44.976 }' 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.976 13:42:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.545 [2024-11-20 13:42:48.276476] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:45.545 [2024-11-20 13:42:48.276548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
Existed_Raid, state configuring 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.545 [2024-11-20 13:42:48.284565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:45.545 [2024-11-20 13:42:48.287256] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:45.545 [2024-11-20 13:42:48.287464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:45.545 [2024-11-20 13:42:48.287620] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:45.545 [2024-11-20 13:42:48.287772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:45.545 "name": "Existed_Raid", 00:22:45.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.545 "strip_size_kb": 64, 00:22:45.545 "state": "configuring", 00:22:45.545 "raid_level": "concat", 00:22:45.545 "superblock": false, 00:22:45.545 "num_base_bdevs": 3, 00:22:45.545 "num_base_bdevs_discovered": 1, 00:22:45.545 "num_base_bdevs_operational": 3, 00:22:45.545 "base_bdevs_list": [ 00:22:45.545 { 00:22:45.545 "name": "BaseBdev1", 00:22:45.545 "uuid": "bcfea319-4926-4778-bf54-7b76a366f03a", 00:22:45.545 "is_configured": true, 00:22:45.545 "data_offset": 0, 00:22:45.545 "data_size": 65536 00:22:45.545 }, 00:22:45.545 { 
00:22:45.545 "name": "BaseBdev2", 00:22:45.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.545 "is_configured": false, 00:22:45.545 "data_offset": 0, 00:22:45.545 "data_size": 0 00:22:45.545 }, 00:22:45.545 { 00:22:45.545 "name": "BaseBdev3", 00:22:45.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.545 "is_configured": false, 00:22:45.545 "data_offset": 0, 00:22:45.545 "data_size": 0 00:22:45.545 } 00:22:45.545 ] 00:22:45.545 }' 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:45.545 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.113 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:46.113 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.113 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.113 [2024-11-20 13:42:48.816717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:46.113 BaseBdev2 00:22:46.113 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.113 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:46.113 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:46.113 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:46.113 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:46.113 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:46.113 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:46.113 13:42:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.114 [ 00:22:46.114 { 00:22:46.114 "name": "BaseBdev2", 00:22:46.114 "aliases": [ 00:22:46.114 "c0c6f6a2-9c28-4950-a884-c109fe2938dd" 00:22:46.114 ], 00:22:46.114 "product_name": "Malloc disk", 00:22:46.114 "block_size": 512, 00:22:46.114 "num_blocks": 65536, 00:22:46.114 "uuid": "c0c6f6a2-9c28-4950-a884-c109fe2938dd", 00:22:46.114 "assigned_rate_limits": { 00:22:46.114 "rw_ios_per_sec": 0, 00:22:46.114 "rw_mbytes_per_sec": 0, 00:22:46.114 "r_mbytes_per_sec": 0, 00:22:46.114 "w_mbytes_per_sec": 0 00:22:46.114 }, 00:22:46.114 "claimed": true, 00:22:46.114 "claim_type": "exclusive_write", 00:22:46.114 "zoned": false, 00:22:46.114 "supported_io_types": { 00:22:46.114 "read": true, 00:22:46.114 "write": true, 00:22:46.114 "unmap": true, 00:22:46.114 "flush": true, 00:22:46.114 "reset": true, 00:22:46.114 "nvme_admin": false, 00:22:46.114 "nvme_io": false, 00:22:46.114 "nvme_io_md": false, 00:22:46.114 "write_zeroes": true, 00:22:46.114 "zcopy": true, 00:22:46.114 "get_zone_info": false, 00:22:46.114 "zone_management": false, 00:22:46.114 "zone_append": false, 00:22:46.114 "compare": false, 00:22:46.114 "compare_and_write": false, 00:22:46.114 "abort": true, 00:22:46.114 "seek_hole": false, 00:22:46.114 "seek_data": false, 00:22:46.114 
"copy": true, 00:22:46.114 "nvme_iov_md": false 00:22:46.114 }, 00:22:46.114 "memory_domains": [ 00:22:46.114 { 00:22:46.114 "dma_device_id": "system", 00:22:46.114 "dma_device_type": 1 00:22:46.114 }, 00:22:46.114 { 00:22:46.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.114 "dma_device_type": 2 00:22:46.114 } 00:22:46.114 ], 00:22:46.114 "driver_specific": {} 00:22:46.114 } 00:22:46.114 ] 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:46.114 
13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:46.114 "name": "Existed_Raid", 00:22:46.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.114 "strip_size_kb": 64, 00:22:46.114 "state": "configuring", 00:22:46.114 "raid_level": "concat", 00:22:46.114 "superblock": false, 00:22:46.114 "num_base_bdevs": 3, 00:22:46.114 "num_base_bdevs_discovered": 2, 00:22:46.114 "num_base_bdevs_operational": 3, 00:22:46.114 "base_bdevs_list": [ 00:22:46.114 { 00:22:46.114 "name": "BaseBdev1", 00:22:46.114 "uuid": "bcfea319-4926-4778-bf54-7b76a366f03a", 00:22:46.114 "is_configured": true, 00:22:46.114 "data_offset": 0, 00:22:46.114 "data_size": 65536 00:22:46.114 }, 00:22:46.114 { 00:22:46.114 "name": "BaseBdev2", 00:22:46.114 "uuid": "c0c6f6a2-9c28-4950-a884-c109fe2938dd", 00:22:46.114 "is_configured": true, 00:22:46.114 "data_offset": 0, 00:22:46.114 "data_size": 65536 00:22:46.114 }, 00:22:46.114 { 00:22:46.114 "name": "BaseBdev3", 00:22:46.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.114 "is_configured": false, 00:22:46.114 "data_offset": 0, 00:22:46.114 "data_size": 0 00:22:46.114 } 00:22:46.114 ] 00:22:46.114 }' 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:46.114 13:42:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.720 13:42:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.720 [2024-11-20 13:42:49.405605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:46.720 [2024-11-20 13:42:49.405680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:46.720 [2024-11-20 13:42:49.405700] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:22:46.720 [2024-11-20 13:42:49.406088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:46.720 [2024-11-20 13:42:49.406334] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:46.720 [2024-11-20 13:42:49.406352] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:46.720 [2024-11-20 13:42:49.406686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:46.720 BaseBdev3 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.720 [ 00:22:46.720 { 00:22:46.720 "name": "BaseBdev3", 00:22:46.720 "aliases": [ 00:22:46.720 "b7b3ea51-67f9-4884-906b-a8ffd12cef27" 00:22:46.720 ], 00:22:46.720 "product_name": "Malloc disk", 00:22:46.720 "block_size": 512, 00:22:46.720 "num_blocks": 65536, 00:22:46.720 "uuid": "b7b3ea51-67f9-4884-906b-a8ffd12cef27", 00:22:46.720 "assigned_rate_limits": { 00:22:46.720 "rw_ios_per_sec": 0, 00:22:46.720 "rw_mbytes_per_sec": 0, 00:22:46.720 "r_mbytes_per_sec": 0, 00:22:46.720 "w_mbytes_per_sec": 0 00:22:46.720 }, 00:22:46.720 "claimed": true, 00:22:46.720 "claim_type": "exclusive_write", 00:22:46.720 "zoned": false, 00:22:46.720 "supported_io_types": { 00:22:46.720 "read": true, 00:22:46.720 "write": true, 00:22:46.720 "unmap": true, 00:22:46.720 "flush": true, 00:22:46.720 "reset": true, 00:22:46.720 "nvme_admin": false, 00:22:46.720 "nvme_io": false, 00:22:46.720 "nvme_io_md": false, 00:22:46.720 "write_zeroes": true, 00:22:46.720 "zcopy": true, 00:22:46.720 "get_zone_info": false, 00:22:46.720 "zone_management": false, 00:22:46.720 "zone_append": false, 00:22:46.720 "compare": false, 00:22:46.720 "compare_and_write": false, 
00:22:46.720 "abort": true, 00:22:46.720 "seek_hole": false, 00:22:46.720 "seek_data": false, 00:22:46.720 "copy": true, 00:22:46.720 "nvme_iov_md": false 00:22:46.720 }, 00:22:46.720 "memory_domains": [ 00:22:46.720 { 00:22:46.720 "dma_device_id": "system", 00:22:46.720 "dma_device_type": 1 00:22:46.720 }, 00:22:46.720 { 00:22:46.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.720 "dma_device_type": 2 00:22:46.720 } 00:22:46.720 ], 00:22:46.720 "driver_specific": {} 00:22:46.720 } 00:22:46.720 ] 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:46.720 
13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.720 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:46.720 "name": "Existed_Raid", 00:22:46.720 "uuid": "c50675bb-109c-4cd0-a814-31d822f8e353", 00:22:46.720 "strip_size_kb": 64, 00:22:46.720 "state": "online", 00:22:46.720 "raid_level": "concat", 00:22:46.720 "superblock": false, 00:22:46.720 "num_base_bdevs": 3, 00:22:46.720 "num_base_bdevs_discovered": 3, 00:22:46.721 "num_base_bdevs_operational": 3, 00:22:46.721 "base_bdevs_list": [ 00:22:46.721 { 00:22:46.721 "name": "BaseBdev1", 00:22:46.721 "uuid": "bcfea319-4926-4778-bf54-7b76a366f03a", 00:22:46.721 "is_configured": true, 00:22:46.721 "data_offset": 0, 00:22:46.721 "data_size": 65536 00:22:46.721 }, 00:22:46.721 { 00:22:46.721 "name": "BaseBdev2", 00:22:46.721 "uuid": "c0c6f6a2-9c28-4950-a884-c109fe2938dd", 00:22:46.721 "is_configured": true, 00:22:46.721 "data_offset": 0, 00:22:46.721 "data_size": 65536 00:22:46.721 }, 00:22:46.721 { 00:22:46.721 "name": "BaseBdev3", 00:22:46.721 "uuid": "b7b3ea51-67f9-4884-906b-a8ffd12cef27", 00:22:46.721 "is_configured": true, 00:22:46.721 "data_offset": 0, 00:22:46.721 "data_size": 65536 00:22:46.721 } 00:22:46.721 ] 00:22:46.721 }' 00:22:46.721 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:46.721 13:42:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.290 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:47.290 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:47.290 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:47.291 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:47.291 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:47.291 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:47.291 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:47.291 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:47.291 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.291 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.291 [2024-11-20 13:42:49.958200] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:47.291 13:42:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.291 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:47.291 "name": "Existed_Raid", 00:22:47.291 "aliases": [ 00:22:47.291 "c50675bb-109c-4cd0-a814-31d822f8e353" 00:22:47.291 ], 00:22:47.291 "product_name": "Raid Volume", 00:22:47.291 "block_size": 512, 00:22:47.291 "num_blocks": 196608, 00:22:47.291 "uuid": "c50675bb-109c-4cd0-a814-31d822f8e353", 00:22:47.291 "assigned_rate_limits": { 00:22:47.291 "rw_ios_per_sec": 0, 00:22:47.291 "rw_mbytes_per_sec": 0, 00:22:47.291 "r_mbytes_per_sec": 0, 00:22:47.291 
"w_mbytes_per_sec": 0 00:22:47.291 }, 00:22:47.291 "claimed": false, 00:22:47.291 "zoned": false, 00:22:47.291 "supported_io_types": { 00:22:47.291 "read": true, 00:22:47.291 "write": true, 00:22:47.291 "unmap": true, 00:22:47.291 "flush": true, 00:22:47.291 "reset": true, 00:22:47.291 "nvme_admin": false, 00:22:47.291 "nvme_io": false, 00:22:47.291 "nvme_io_md": false, 00:22:47.291 "write_zeroes": true, 00:22:47.291 "zcopy": false, 00:22:47.291 "get_zone_info": false, 00:22:47.291 "zone_management": false, 00:22:47.291 "zone_append": false, 00:22:47.291 "compare": false, 00:22:47.291 "compare_and_write": false, 00:22:47.291 "abort": false, 00:22:47.291 "seek_hole": false, 00:22:47.291 "seek_data": false, 00:22:47.291 "copy": false, 00:22:47.291 "nvme_iov_md": false 00:22:47.291 }, 00:22:47.291 "memory_domains": [ 00:22:47.291 { 00:22:47.291 "dma_device_id": "system", 00:22:47.291 "dma_device_type": 1 00:22:47.291 }, 00:22:47.291 { 00:22:47.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.291 "dma_device_type": 2 00:22:47.291 }, 00:22:47.291 { 00:22:47.291 "dma_device_id": "system", 00:22:47.291 "dma_device_type": 1 00:22:47.291 }, 00:22:47.291 { 00:22:47.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.291 "dma_device_type": 2 00:22:47.291 }, 00:22:47.291 { 00:22:47.291 "dma_device_id": "system", 00:22:47.291 "dma_device_type": 1 00:22:47.291 }, 00:22:47.291 { 00:22:47.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.291 "dma_device_type": 2 00:22:47.291 } 00:22:47.291 ], 00:22:47.291 "driver_specific": { 00:22:47.291 "raid": { 00:22:47.291 "uuid": "c50675bb-109c-4cd0-a814-31d822f8e353", 00:22:47.291 "strip_size_kb": 64, 00:22:47.291 "state": "online", 00:22:47.291 "raid_level": "concat", 00:22:47.291 "superblock": false, 00:22:47.291 "num_base_bdevs": 3, 00:22:47.291 "num_base_bdevs_discovered": 3, 00:22:47.291 "num_base_bdevs_operational": 3, 00:22:47.291 "base_bdevs_list": [ 00:22:47.291 { 00:22:47.291 "name": "BaseBdev1", 00:22:47.291 "uuid": 
"bcfea319-4926-4778-bf54-7b76a366f03a", 00:22:47.291 "is_configured": true, 00:22:47.291 "data_offset": 0, 00:22:47.291 "data_size": 65536 00:22:47.291 }, 00:22:47.291 { 00:22:47.291 "name": "BaseBdev2", 00:22:47.291 "uuid": "c0c6f6a2-9c28-4950-a884-c109fe2938dd", 00:22:47.291 "is_configured": true, 00:22:47.291 "data_offset": 0, 00:22:47.291 "data_size": 65536 00:22:47.291 }, 00:22:47.291 { 00:22:47.291 "name": "BaseBdev3", 00:22:47.291 "uuid": "b7b3ea51-67f9-4884-906b-a8ffd12cef27", 00:22:47.291 "is_configured": true, 00:22:47.291 "data_offset": 0, 00:22:47.291 "data_size": 65536 00:22:47.291 } 00:22:47.291 ] 00:22:47.291 } 00:22:47.291 } 00:22:47.291 }' 00:22:47.291 13:42:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:47.291 BaseBdev2 00:22:47.291 BaseBdev3' 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.291 
13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.291 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:47.550 
13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.550 [2024-11-20 13:42:50.237978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:47.550 [2024-11-20 13:42:50.238017] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:47.550 [2024-11-20 13:42:50.238092] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:47.550 "name": "Existed_Raid", 00:22:47.550 "uuid": "c50675bb-109c-4cd0-a814-31d822f8e353", 00:22:47.550 "strip_size_kb": 64, 00:22:47.550 "state": "offline", 00:22:47.550 "raid_level": "concat", 00:22:47.550 "superblock": false, 00:22:47.550 "num_base_bdevs": 3, 00:22:47.550 "num_base_bdevs_discovered": 2, 00:22:47.550 "num_base_bdevs_operational": 2, 00:22:47.550 "base_bdevs_list": [ 00:22:47.550 { 00:22:47.550 "name": null, 00:22:47.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.550 "is_configured": false, 00:22:47.550 "data_offset": 0, 00:22:47.550 "data_size": 65536 00:22:47.550 }, 00:22:47.550 { 00:22:47.550 "name": "BaseBdev2", 00:22:47.550 "uuid": "c0c6f6a2-9c28-4950-a884-c109fe2938dd", 00:22:47.550 
"is_configured": true, 00:22:47.550 "data_offset": 0, 00:22:47.550 "data_size": 65536 00:22:47.550 }, 00:22:47.550 { 00:22:47.550 "name": "BaseBdev3", 00:22:47.550 "uuid": "b7b3ea51-67f9-4884-906b-a8ffd12cef27", 00:22:47.550 "is_configured": true, 00:22:47.550 "data_offset": 0, 00:22:47.550 "data_size": 65536 00:22:47.550 } 00:22:47.550 ] 00:22:47.550 }' 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:47.550 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.118 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:48.118 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:48.118 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.118 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.118 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.118 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:48.118 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.118 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:48.118 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:48.118 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:48.118 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.118 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.118 [2024-11-20 13:42:50.853521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:22:48.118 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.118 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:48.118 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:48.118 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.118 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.118 13:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.118 13:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:48.118 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.376 [2024-11-20 13:42:51.045332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:48.376 [2024-11-20 13:42:51.045449] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < 
num_base_bdevs )) 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.376 BaseBdev2 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 
-- # local i 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.376 [ 00:22:48.376 { 00:22:48.376 "name": "BaseBdev2", 00:22:48.376 "aliases": [ 00:22:48.376 "04e3e6da-1673-408f-b817-fac17f4e843c" 00:22:48.376 ], 00:22:48.376 "product_name": "Malloc disk", 00:22:48.376 "block_size": 512, 00:22:48.376 "num_blocks": 65536, 00:22:48.376 "uuid": "04e3e6da-1673-408f-b817-fac17f4e843c", 00:22:48.376 "assigned_rate_limits": { 00:22:48.376 "rw_ios_per_sec": 0, 00:22:48.376 "rw_mbytes_per_sec": 0, 00:22:48.376 "r_mbytes_per_sec": 0, 00:22:48.376 "w_mbytes_per_sec": 0 00:22:48.376 }, 00:22:48.376 "claimed": false, 00:22:48.376 "zoned": false, 00:22:48.376 "supported_io_types": { 00:22:48.376 "read": true, 00:22:48.376 "write": true, 00:22:48.376 "unmap": true, 00:22:48.376 "flush": true, 00:22:48.376 "reset": true, 00:22:48.376 "nvme_admin": false, 00:22:48.376 "nvme_io": false, 00:22:48.376 "nvme_io_md": false, 00:22:48.376 "write_zeroes": true, 00:22:48.376 "zcopy": true, 00:22:48.376 "get_zone_info": false, 
00:22:48.376 "zone_management": false, 00:22:48.376 "zone_append": false, 00:22:48.376 "compare": false, 00:22:48.376 "compare_and_write": false, 00:22:48.376 "abort": true, 00:22:48.376 "seek_hole": false, 00:22:48.376 "seek_data": false, 00:22:48.376 "copy": true, 00:22:48.376 "nvme_iov_md": false 00:22:48.376 }, 00:22:48.376 "memory_domains": [ 00:22:48.376 { 00:22:48.376 "dma_device_id": "system", 00:22:48.376 "dma_device_type": 1 00:22:48.376 }, 00:22:48.376 { 00:22:48.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.376 "dma_device_type": 2 00:22:48.376 } 00:22:48.376 ], 00:22:48.376 "driver_specific": {} 00:22:48.376 } 00:22:48.376 ] 00:22:48.376 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.634 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:48.634 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:48.634 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:48.634 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:48.634 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.634 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.634 BaseBdev3 00:22:48.634 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.634 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:48.634 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:48.634 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:48.634 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 
00:22:48.634 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:48.634 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:48.634 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:48.634 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.634 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.634 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.634 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:48.634 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.635 [ 00:22:48.635 { 00:22:48.635 "name": "BaseBdev3", 00:22:48.635 "aliases": [ 00:22:48.635 "ec6cd2b2-8940-4649-85de-84bcd6da4374" 00:22:48.635 ], 00:22:48.635 "product_name": "Malloc disk", 00:22:48.635 "block_size": 512, 00:22:48.635 "num_blocks": 65536, 00:22:48.635 "uuid": "ec6cd2b2-8940-4649-85de-84bcd6da4374", 00:22:48.635 "assigned_rate_limits": { 00:22:48.635 "rw_ios_per_sec": 0, 00:22:48.635 "rw_mbytes_per_sec": 0, 00:22:48.635 "r_mbytes_per_sec": 0, 00:22:48.635 "w_mbytes_per_sec": 0 00:22:48.635 }, 00:22:48.635 "claimed": false, 00:22:48.635 "zoned": false, 00:22:48.635 "supported_io_types": { 00:22:48.635 "read": true, 00:22:48.635 "write": true, 00:22:48.635 "unmap": true, 00:22:48.635 "flush": true, 00:22:48.635 "reset": true, 00:22:48.635 "nvme_admin": false, 00:22:48.635 "nvme_io": false, 00:22:48.635 "nvme_io_md": false, 00:22:48.635 "write_zeroes": true, 00:22:48.635 "zcopy": true, 00:22:48.635 "get_zone_info": false, 00:22:48.635 
"zone_management": false, 00:22:48.635 "zone_append": false, 00:22:48.635 "compare": false, 00:22:48.635 "compare_and_write": false, 00:22:48.635 "abort": true, 00:22:48.635 "seek_hole": false, 00:22:48.635 "seek_data": false, 00:22:48.635 "copy": true, 00:22:48.635 "nvme_iov_md": false 00:22:48.635 }, 00:22:48.635 "memory_domains": [ 00:22:48.635 { 00:22:48.635 "dma_device_id": "system", 00:22:48.635 "dma_device_type": 1 00:22:48.635 }, 00:22:48.635 { 00:22:48.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.635 "dma_device_type": 2 00:22:48.635 } 00:22:48.635 ], 00:22:48.635 "driver_specific": {} 00:22:48.635 } 00:22:48.635 ] 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.635 [2024-11-20 13:42:51.376992] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:48.635 [2024-11-20 13:42:51.377259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:48.635 [2024-11-20 13:42:51.377457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:48.635 [2024-11-20 13:42:51.381081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:48.635 13:42:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.635 "name": "Existed_Raid", 00:22:48.635 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:48.635 "strip_size_kb": 64, 00:22:48.635 "state": "configuring", 00:22:48.635 "raid_level": "concat", 00:22:48.635 "superblock": false, 00:22:48.635 "num_base_bdevs": 3, 00:22:48.635 "num_base_bdevs_discovered": 2, 00:22:48.635 "num_base_bdevs_operational": 3, 00:22:48.635 "base_bdevs_list": [ 00:22:48.635 { 00:22:48.635 "name": "BaseBdev1", 00:22:48.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.635 "is_configured": false, 00:22:48.635 "data_offset": 0, 00:22:48.635 "data_size": 0 00:22:48.635 }, 00:22:48.635 { 00:22:48.635 "name": "BaseBdev2", 00:22:48.635 "uuid": "04e3e6da-1673-408f-b817-fac17f4e843c", 00:22:48.635 "is_configured": true, 00:22:48.635 "data_offset": 0, 00:22:48.635 "data_size": 65536 00:22:48.635 }, 00:22:48.635 { 00:22:48.635 "name": "BaseBdev3", 00:22:48.635 "uuid": "ec6cd2b2-8940-4649-85de-84bcd6da4374", 00:22:48.635 "is_configured": true, 00:22:48.635 "data_offset": 0, 00:22:48.635 "data_size": 65536 00:22:48.635 } 00:22:48.635 ] 00:22:48.635 }' 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.635 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.200 [2024-11-20 13:42:51.897805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:49.200 13:42:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:49.200 "name": "Existed_Raid", 00:22:49.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.200 "strip_size_kb": 64, 00:22:49.200 "state": "configuring", 00:22:49.200 "raid_level": "concat", 00:22:49.200 "superblock": false, 00:22:49.200 "num_base_bdevs": 3, 00:22:49.200 "num_base_bdevs_discovered": 1, 00:22:49.200 
"num_base_bdevs_operational": 3, 00:22:49.200 "base_bdevs_list": [ 00:22:49.200 { 00:22:49.200 "name": "BaseBdev1", 00:22:49.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.200 "is_configured": false, 00:22:49.200 "data_offset": 0, 00:22:49.200 "data_size": 0 00:22:49.200 }, 00:22:49.200 { 00:22:49.200 "name": null, 00:22:49.200 "uuid": "04e3e6da-1673-408f-b817-fac17f4e843c", 00:22:49.200 "is_configured": false, 00:22:49.200 "data_offset": 0, 00:22:49.200 "data_size": 65536 00:22:49.200 }, 00:22:49.200 { 00:22:49.200 "name": "BaseBdev3", 00:22:49.200 "uuid": "ec6cd2b2-8940-4649-85de-84bcd6da4374", 00:22:49.200 "is_configured": true, 00:22:49.200 "data_offset": 0, 00:22:49.200 "data_size": 65536 00:22:49.200 } 00:22:49.200 ] 00:22:49.200 }' 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:49.200 13:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:22:49.767 [2024-11-20 13:42:52.555307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:49.767 BaseBdev1 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.767 [ 00:22:49.767 { 00:22:49.767 "name": "BaseBdev1", 00:22:49.767 "aliases": [ 00:22:49.767 "a89ad1ec-f10f-4d33-8ef2-40104ac1f2d9" 00:22:49.767 ], 00:22:49.767 "product_name": "Malloc disk", 00:22:49.767 "block_size": 512, 00:22:49.767 "num_blocks": 65536, 00:22:49.767 
"uuid": "a89ad1ec-f10f-4d33-8ef2-40104ac1f2d9", 00:22:49.767 "assigned_rate_limits": { 00:22:49.767 "rw_ios_per_sec": 0, 00:22:49.767 "rw_mbytes_per_sec": 0, 00:22:49.767 "r_mbytes_per_sec": 0, 00:22:49.767 "w_mbytes_per_sec": 0 00:22:49.767 }, 00:22:49.767 "claimed": true, 00:22:49.767 "claim_type": "exclusive_write", 00:22:49.767 "zoned": false, 00:22:49.767 "supported_io_types": { 00:22:49.767 "read": true, 00:22:49.767 "write": true, 00:22:49.767 "unmap": true, 00:22:49.767 "flush": true, 00:22:49.767 "reset": true, 00:22:49.767 "nvme_admin": false, 00:22:49.767 "nvme_io": false, 00:22:49.767 "nvme_io_md": false, 00:22:49.767 "write_zeroes": true, 00:22:49.767 "zcopy": true, 00:22:49.767 "get_zone_info": false, 00:22:49.767 "zone_management": false, 00:22:49.767 "zone_append": false, 00:22:49.767 "compare": false, 00:22:49.767 "compare_and_write": false, 00:22:49.767 "abort": true, 00:22:49.767 "seek_hole": false, 00:22:49.767 "seek_data": false, 00:22:49.767 "copy": true, 00:22:49.767 "nvme_iov_md": false 00:22:49.767 }, 00:22:49.767 "memory_domains": [ 00:22:49.767 { 00:22:49.767 "dma_device_id": "system", 00:22:49.767 "dma_device_type": 1 00:22:49.767 }, 00:22:49.767 { 00:22:49.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.767 "dma_device_type": 2 00:22:49.767 } 00:22:49.767 ], 00:22:49.767 "driver_specific": {} 00:22:49.767 } 00:22:49.767 ] 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:49.767 
13:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:49.767 13:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:49.768 13:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:49.768 13:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:49.768 13:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:49.768 13:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:49.768 13:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.768 13:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.768 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.768 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.768 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.768 13:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:49.768 "name": "Existed_Raid", 00:22:49.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.768 "strip_size_kb": 64, 00:22:49.768 "state": "configuring", 00:22:49.768 "raid_level": "concat", 00:22:49.768 "superblock": false, 00:22:49.768 "num_base_bdevs": 3, 00:22:49.768 "num_base_bdevs_discovered": 2, 00:22:49.768 "num_base_bdevs_operational": 3, 00:22:49.768 "base_bdevs_list": [ 00:22:49.768 { 00:22:49.768 "name": "BaseBdev1", 00:22:49.768 "uuid": "a89ad1ec-f10f-4d33-8ef2-40104ac1f2d9", 00:22:49.768 "is_configured": true, 00:22:49.768 
"data_offset": 0, 00:22:49.768 "data_size": 65536 00:22:49.768 }, 00:22:49.768 { 00:22:49.768 "name": null, 00:22:49.768 "uuid": "04e3e6da-1673-408f-b817-fac17f4e843c", 00:22:49.768 "is_configured": false, 00:22:49.768 "data_offset": 0, 00:22:49.768 "data_size": 65536 00:22:49.768 }, 00:22:49.768 { 00:22:49.768 "name": "BaseBdev3", 00:22:49.768 "uuid": "ec6cd2b2-8940-4649-85de-84bcd6da4374", 00:22:49.768 "is_configured": true, 00:22:49.768 "data_offset": 0, 00:22:49.768 "data_size": 65536 00:22:49.768 } 00:22:49.768 ] 00:22:49.768 }' 00:22:49.768 13:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:49.768 13:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.334 [2024-11-20 13:42:53.155592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.334 
13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.334 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:50.334 "name": "Existed_Raid", 00:22:50.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.334 "strip_size_kb": 64, 00:22:50.334 "state": "configuring", 
00:22:50.334 "raid_level": "concat", 00:22:50.334 "superblock": false, 00:22:50.335 "num_base_bdevs": 3, 00:22:50.335 "num_base_bdevs_discovered": 1, 00:22:50.335 "num_base_bdevs_operational": 3, 00:22:50.335 "base_bdevs_list": [ 00:22:50.335 { 00:22:50.335 "name": "BaseBdev1", 00:22:50.335 "uuid": "a89ad1ec-f10f-4d33-8ef2-40104ac1f2d9", 00:22:50.335 "is_configured": true, 00:22:50.335 "data_offset": 0, 00:22:50.335 "data_size": 65536 00:22:50.335 }, 00:22:50.335 { 00:22:50.335 "name": null, 00:22:50.335 "uuid": "04e3e6da-1673-408f-b817-fac17f4e843c", 00:22:50.335 "is_configured": false, 00:22:50.335 "data_offset": 0, 00:22:50.335 "data_size": 65536 00:22:50.335 }, 00:22:50.335 { 00:22:50.335 "name": null, 00:22:50.335 "uuid": "ec6cd2b2-8940-4649-85de-84bcd6da4374", 00:22:50.335 "is_configured": false, 00:22:50.335 "data_offset": 0, 00:22:50.335 "data_size": 65536 00:22:50.335 } 00:22:50.335 ] 00:22:50.335 }' 00:22:50.335 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:50.335 13:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.900 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.900 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:50.900 13:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.900 13:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.900 13:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.900 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:50.900 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:50.900 13:42:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.900 13:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.900 [2024-11-20 13:42:53.755818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:50.900 13:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.900 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:50.900 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:50.900 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:50.900 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:50.900 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:50.900 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:50.900 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:50.900 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:50.900 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:50.901 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:50.901 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.901 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.901 13:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.901 13:42:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.901 13:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.158 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:51.159 "name": "Existed_Raid", 00:22:51.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.159 "strip_size_kb": 64, 00:22:51.159 "state": "configuring", 00:22:51.159 "raid_level": "concat", 00:22:51.159 "superblock": false, 00:22:51.159 "num_base_bdevs": 3, 00:22:51.159 "num_base_bdevs_discovered": 2, 00:22:51.159 "num_base_bdevs_operational": 3, 00:22:51.159 "base_bdevs_list": [ 00:22:51.159 { 00:22:51.159 "name": "BaseBdev1", 00:22:51.159 "uuid": "a89ad1ec-f10f-4d33-8ef2-40104ac1f2d9", 00:22:51.159 "is_configured": true, 00:22:51.159 "data_offset": 0, 00:22:51.159 "data_size": 65536 00:22:51.159 }, 00:22:51.159 { 00:22:51.159 "name": null, 00:22:51.159 "uuid": "04e3e6da-1673-408f-b817-fac17f4e843c", 00:22:51.159 "is_configured": false, 00:22:51.159 "data_offset": 0, 00:22:51.159 "data_size": 65536 00:22:51.159 }, 00:22:51.159 { 00:22:51.159 "name": "BaseBdev3", 00:22:51.159 "uuid": "ec6cd2b2-8940-4649-85de-84bcd6da4374", 00:22:51.159 "is_configured": true, 00:22:51.159 "data_offset": 0, 00:22:51.159 "data_size": 65536 00:22:51.159 } 00:22:51.159 ] 00:22:51.159 }' 00:22:51.159 13:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:51.159 13:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.424 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.424 13:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.424 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:51.424 13:42:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.424 13:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.424 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:51.424 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:51.424 13:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.424 13:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.424 [2024-11-20 13:42:54.327999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:51.683 13:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.683 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:51.683 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:51.683 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:51.683 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:51.683 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:51.683 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:51.683 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:51.683 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:51.683 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:51.683 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:22:51.683 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.683 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.683 13:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.683 13:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.683 13:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.683 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:51.683 "name": "Existed_Raid", 00:22:51.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.683 "strip_size_kb": 64, 00:22:51.683 "state": "configuring", 00:22:51.683 "raid_level": "concat", 00:22:51.683 "superblock": false, 00:22:51.683 "num_base_bdevs": 3, 00:22:51.683 "num_base_bdevs_discovered": 1, 00:22:51.683 "num_base_bdevs_operational": 3, 00:22:51.683 "base_bdevs_list": [ 00:22:51.683 { 00:22:51.683 "name": null, 00:22:51.683 "uuid": "a89ad1ec-f10f-4d33-8ef2-40104ac1f2d9", 00:22:51.683 "is_configured": false, 00:22:51.683 "data_offset": 0, 00:22:51.683 "data_size": 65536 00:22:51.683 }, 00:22:51.683 { 00:22:51.683 "name": null, 00:22:51.683 "uuid": "04e3e6da-1673-408f-b817-fac17f4e843c", 00:22:51.683 "is_configured": false, 00:22:51.683 "data_offset": 0, 00:22:51.683 "data_size": 65536 00:22:51.683 }, 00:22:51.683 { 00:22:51.683 "name": "BaseBdev3", 00:22:51.683 "uuid": "ec6cd2b2-8940-4649-85de-84bcd6da4374", 00:22:51.683 "is_configured": true, 00:22:51.683 "data_offset": 0, 00:22:51.683 "data_size": 65536 00:22:51.683 } 00:22:51.683 ] 00:22:51.683 }' 00:22:51.683 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:51.683 13:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.250 
13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.250 [2024-11-20 13:42:54.928877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:52.250 
13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.250 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:52.250 "name": "Existed_Raid", 00:22:52.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.250 "strip_size_kb": 64, 00:22:52.250 "state": "configuring", 00:22:52.250 "raid_level": "concat", 00:22:52.250 "superblock": false, 00:22:52.250 "num_base_bdevs": 3, 00:22:52.250 "num_base_bdevs_discovered": 2, 00:22:52.250 "num_base_bdevs_operational": 3, 00:22:52.250 "base_bdevs_list": [ 00:22:52.250 { 00:22:52.250 "name": null, 00:22:52.250 "uuid": "a89ad1ec-f10f-4d33-8ef2-40104ac1f2d9", 00:22:52.250 "is_configured": false, 00:22:52.250 "data_offset": 0, 00:22:52.250 "data_size": 65536 00:22:52.250 }, 00:22:52.250 { 00:22:52.250 "name": "BaseBdev2", 00:22:52.250 "uuid": "04e3e6da-1673-408f-b817-fac17f4e843c", 00:22:52.250 "is_configured": true, 00:22:52.250 "data_offset": 0, 00:22:52.250 "data_size": 65536 00:22:52.250 }, 00:22:52.250 { 00:22:52.250 "name": "BaseBdev3", 00:22:52.250 
"uuid": "ec6cd2b2-8940-4649-85de-84bcd6da4374", 00:22:52.250 "is_configured": true, 00:22:52.250 "data_offset": 0, 00:22:52.250 "data_size": 65536 00:22:52.250 } 00:22:52.250 ] 00:22:52.251 }' 00:22:52.251 13:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:52.251 13:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.819 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.819 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:52.819 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.819 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.819 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.819 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:52.819 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a89ad1ec-f10f-4d33-8ef2-40104ac1f2d9 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:52.820 [2024-11-20 13:42:55.575495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:52.820 [2024-11-20 13:42:55.575567] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:52.820 [2024-11-20 13:42:55.575583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:22:52.820 [2024-11-20 13:42:55.575954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:22:52.820 [2024-11-20 13:42:55.576179] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:52.820 [2024-11-20 13:42:55.576196] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:52.820 [2024-11-20 13:42:55.576585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.820 NewBaseBdev 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.820 
13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.820 [ 00:22:52.820 { 00:22:52.820 "name": "NewBaseBdev", 00:22:52.820 "aliases": [ 00:22:52.820 "a89ad1ec-f10f-4d33-8ef2-40104ac1f2d9" 00:22:52.820 ], 00:22:52.820 "product_name": "Malloc disk", 00:22:52.820 "block_size": 512, 00:22:52.820 "num_blocks": 65536, 00:22:52.820 "uuid": "a89ad1ec-f10f-4d33-8ef2-40104ac1f2d9", 00:22:52.820 "assigned_rate_limits": { 00:22:52.820 "rw_ios_per_sec": 0, 00:22:52.820 "rw_mbytes_per_sec": 0, 00:22:52.820 "r_mbytes_per_sec": 0, 00:22:52.820 "w_mbytes_per_sec": 0 00:22:52.820 }, 00:22:52.820 "claimed": true, 00:22:52.820 "claim_type": "exclusive_write", 00:22:52.820 "zoned": false, 00:22:52.820 "supported_io_types": { 00:22:52.820 "read": true, 00:22:52.820 "write": true, 00:22:52.820 "unmap": true, 00:22:52.820 "flush": true, 00:22:52.820 "reset": true, 00:22:52.820 "nvme_admin": false, 00:22:52.820 "nvme_io": false, 00:22:52.820 "nvme_io_md": false, 00:22:52.820 "write_zeroes": true, 00:22:52.820 "zcopy": true, 00:22:52.820 "get_zone_info": false, 00:22:52.820 "zone_management": false, 00:22:52.820 "zone_append": false, 00:22:52.820 "compare": false, 00:22:52.820 "compare_and_write": false, 00:22:52.820 "abort": true, 00:22:52.820 "seek_hole": false, 00:22:52.820 "seek_data": false, 00:22:52.820 "copy": true, 00:22:52.820 "nvme_iov_md": false 00:22:52.820 }, 00:22:52.820 "memory_domains": [ 00:22:52.820 { 00:22:52.820 "dma_device_id": "system", 00:22:52.820 "dma_device_type": 1 
00:22:52.820 }, 00:22:52.820 { 00:22:52.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.820 "dma_device_type": 2 00:22:52.820 } 00:22:52.820 ], 00:22:52.820 "driver_specific": {} 00:22:52.820 } 00:22:52.820 ] 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:52.820 "name": "Existed_Raid", 00:22:52.820 "uuid": "88fd6ca5-80d0-49bc-8c51-425d0e5d0279", 00:22:52.820 "strip_size_kb": 64, 00:22:52.820 "state": "online", 00:22:52.820 "raid_level": "concat", 00:22:52.820 "superblock": false, 00:22:52.820 "num_base_bdevs": 3, 00:22:52.820 "num_base_bdevs_discovered": 3, 00:22:52.820 "num_base_bdevs_operational": 3, 00:22:52.820 "base_bdevs_list": [ 00:22:52.820 { 00:22:52.820 "name": "NewBaseBdev", 00:22:52.820 "uuid": "a89ad1ec-f10f-4d33-8ef2-40104ac1f2d9", 00:22:52.820 "is_configured": true, 00:22:52.820 "data_offset": 0, 00:22:52.820 "data_size": 65536 00:22:52.820 }, 00:22:52.820 { 00:22:52.820 "name": "BaseBdev2", 00:22:52.820 "uuid": "04e3e6da-1673-408f-b817-fac17f4e843c", 00:22:52.820 "is_configured": true, 00:22:52.820 "data_offset": 0, 00:22:52.820 "data_size": 65536 00:22:52.820 }, 00:22:52.820 { 00:22:52.820 "name": "BaseBdev3", 00:22:52.820 "uuid": "ec6cd2b2-8940-4649-85de-84bcd6da4374", 00:22:52.820 "is_configured": true, 00:22:52.820 "data_offset": 0, 00:22:52.820 "data_size": 65536 00:22:52.820 } 00:22:52.820 ] 00:22:52.820 }' 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:52.820 13:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- 
# local base_bdev_names
00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:53.389 [2024-11-20 13:42:56.120106] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:22:53.389 "name": "Existed_Raid",
00:22:53.389 "aliases": [
00:22:53.389 "88fd6ca5-80d0-49bc-8c51-425d0e5d0279"
00:22:53.389 ],
00:22:53.389 "product_name": "Raid Volume",
00:22:53.389 "block_size": 512,
00:22:53.389 "num_blocks": 196608,
00:22:53.389 "uuid": "88fd6ca5-80d0-49bc-8c51-425d0e5d0279",
00:22:53.389 "assigned_rate_limits": {
00:22:53.389 "rw_ios_per_sec": 0,
00:22:53.389 "rw_mbytes_per_sec": 0,
00:22:53.389 "r_mbytes_per_sec": 0,
00:22:53.389 "w_mbytes_per_sec": 0
00:22:53.389 },
00:22:53.389 "claimed": false,
00:22:53.389 "zoned": false,
00:22:53.389 "supported_io_types": {
00:22:53.389 "read": true,
00:22:53.389 "write": true,
00:22:53.389 "unmap": true,
00:22:53.389 "flush": true,
00:22:53.389 "reset": true,
00:22:53.389 "nvme_admin": false,
00:22:53.389 "nvme_io": false,
00:22:53.389 "nvme_io_md": false,
00:22:53.389 "write_zeroes": true,
00:22:53.389 "zcopy": false,
00:22:53.389 "get_zone_info": false,
00:22:53.389 "zone_management": false,
00:22:53.389 "zone_append": false,
00:22:53.389 "compare": false,
00:22:53.389 "compare_and_write": false,
00:22:53.389 "abort": false,
00:22:53.389 "seek_hole": false,
00:22:53.389 "seek_data": false,
00:22:53.389 "copy": false,
00:22:53.389 "nvme_iov_md": false
00:22:53.389 },
00:22:53.389 "memory_domains": [
00:22:53.389 {
00:22:53.389 "dma_device_id": "system",
00:22:53.389 "dma_device_type": 1
00:22:53.389 },
00:22:53.389 {
00:22:53.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:53.389 "dma_device_type": 2
00:22:53.389 },
00:22:53.389 {
00:22:53.389 "dma_device_id": "system",
00:22:53.389 "dma_device_type": 1
00:22:53.389 },
00:22:53.389 {
00:22:53.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:53.389 "dma_device_type": 2
00:22:53.389 },
00:22:53.389 {
00:22:53.389 "dma_device_id": "system",
00:22:53.389 "dma_device_type": 1
00:22:53.389 },
00:22:53.389 {
00:22:53.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:53.389 "dma_device_type": 2
00:22:53.389 }
00:22:53.389 ],
00:22:53.389 "driver_specific": {
00:22:53.389 "raid": {
00:22:53.389 "uuid": "88fd6ca5-80d0-49bc-8c51-425d0e5d0279",
00:22:53.389 "strip_size_kb": 64,
00:22:53.389 "state": "online",
00:22:53.389 "raid_level": "concat",
00:22:53.389 "superblock": false,
00:22:53.389 "num_base_bdevs": 3,
00:22:53.389 "num_base_bdevs_discovered": 3,
00:22:53.389 "num_base_bdevs_operational": 3,
00:22:53.389 "base_bdevs_list": [
00:22:53.389 {
00:22:53.389 "name": "NewBaseBdev",
00:22:53.389 "uuid": "a89ad1ec-f10f-4d33-8ef2-40104ac1f2d9",
00:22:53.389 "is_configured": true,
00:22:53.389 "data_offset": 0,
00:22:53.389 "data_size": 65536
00:22:53.389 },
00:22:53.389 {
00:22:53.389 "name": "BaseBdev2",
00:22:53.389 "uuid": "04e3e6da-1673-408f-b817-fac17f4e843c",
00:22:53.389 "is_configured": true,
00:22:53.389 "data_offset": 0,
00:22:53.389 "data_size": 65536
00:22:53.389 },
00:22:53.389 {
00:22:53.389 "name": "BaseBdev3",
00:22:53.389 "uuid": "ec6cd2b2-8940-4649-85de-84bcd6da4374",
00:22:53.389 "is_configured": true,
00:22:53.389 "data_offset": 0,
00:22:53.389 "data_size": 65536
00:22:53.389 }
00:22:53.389 ]
00:22:53.389 }
00:22:53.389 }
00:22:53.389 }'
00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:22:53.389 BaseBdev2
00:22:53.389 BaseBdev3'
00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:53.389 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:53.648 [2024-11-20 13:42:56.423816] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:22:53.648 [2024-11-20 13:42:56.423859] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:53.648 [2024-11-20 13:42:56.423998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:53.648 [2024-11-20 13:42:56.424093] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:53.648 [2024-11-20 13:42:56.424114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65808
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65808 ']'
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65808
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
00:22:53.648 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:53.649 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65808
killing process with pid 65808
00:22:53.649 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:53.649 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:53.649 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65808'
00:22:53.649 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65808
00:22:53.649 [2024-11-20 13:42:56.454968] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:22:53.649 13:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65808
00:22:53.907 [2024-11-20 13:42:56.735083] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:22:55.284
00:22:55.284 real 0m11.858s
00:22:55.284 user 0m19.493s
00:22:55.284 sys 0m1.542s
00:22:55.284 ************************************
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:22:55.284 END TEST raid_state_function_test
00:22:55.284 ************************************
00:22:55.284 13:42:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true
00:22:55.284 13:42:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:22:55.284 13:42:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:55.284 13:42:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:22:55.284 ************************************
00:22:55.284 START TEST raid_state_function_test_sb
00:22:55.284 ************************************
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66440
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66440'
Process raid pid: 66440
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66440
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66440 ']'
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:55.284 13:42:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:55.284 [2024-11-20 13:42:57.950510] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization...
00:22:55.284 [2024-11-20 13:42:57.950682] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:55.284 [2024-11-20 13:42:58.120869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:55.542 [2024-11-20 13:42:58.255983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:55.799 [2024-11-20 13:42:58.463865] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:55.799 [2024-11-20 13:42:58.463921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:56.090 13:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:56.090 13:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:22:56.090 13:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:22:56.090 13:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.090 13:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:56.090 [2024-11-20 13:42:58.993366] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:22:56.090 [2024-11-20 13:42:58.993432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:22:56.090 [2024-11-20 13:42:58.993450] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:22:56.090 [2024-11-20 13:42:58.993467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:22:56.090 [2024-11-20 13:42:58.993477] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:22:56.090 [2024-11-20 13:42:58.993491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:22:56.090 13:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.090 13:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:22:56.090 13:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:56.090 13:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:56.090 13:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:22:56.090 13:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:56.090 13:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:56.090 13:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:56.090 13:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:56.090 13:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:56.090 13:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:56.090 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:56.090 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.090 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:56.090 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:56.349 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.349 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:56.349 "name": "Existed_Raid",
00:22:56.349 "uuid": "f582e097-2384-488c-afc1-e82cfd664a5f",
00:22:56.349 "strip_size_kb": 64,
00:22:56.349 "state": "configuring",
00:22:56.349 "raid_level": "concat",
00:22:56.349 "superblock": true,
00:22:56.349 "num_base_bdevs": 3,
00:22:56.349 "num_base_bdevs_discovered": 0,
00:22:56.349 "num_base_bdevs_operational": 3,
00:22:56.349 "base_bdevs_list": [
00:22:56.349 {
00:22:56.349 "name": "BaseBdev1",
00:22:56.349 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:56.349 "is_configured": false,
00:22:56.349 "data_offset": 0,
00:22:56.349 "data_size": 0
00:22:56.349 },
00:22:56.349 {
00:22:56.349 "name": "BaseBdev2",
00:22:56.349 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:56.349 "is_configured": false,
00:22:56.349 "data_offset": 0,
00:22:56.349 "data_size": 0
00:22:56.349 },
00:22:56.349 {
00:22:56.349 "name": "BaseBdev3",
00:22:56.349 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:56.349 "is_configured": false,
00:22:56.349 "data_offset": 0,
00:22:56.349 "data_size": 0
00:22:56.349 }
00:22:56.349 ]
00:22:56.349 }'
00:22:56.349 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:56.349 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:56.606 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:22:56.606 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.606 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:56.606 [2024-11-20 13:42:59.485425] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:22:56.606 [2024-11-20 13:42:59.485477] bdev_raid.c: 380:raid_bdev_cleanup:
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:22:56.606 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.606 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:22:56.607 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.607 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:56.607 [2024-11-20 13:42:59.493419] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:22:56.607 [2024-11-20 13:42:59.493479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:22:56.607 [2024-11-20 13:42:59.493494] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:22:56.607 [2024-11-20 13:42:59.493511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:22:56.607 [2024-11-20 13:42:59.493520] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:22:56.607 [2024-11-20 13:42:59.493534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:22:56.607 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.607 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:22:56.607 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.607 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:56.865 [2024-11-20 13:42:59.538875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:22:56.865 BaseBdev1
00:22:56.865 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.865 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:56.866 [
00:22:56.866 {
00:22:56.866 "name": "BaseBdev1",
00:22:56.866 "aliases": [
00:22:56.866 "3dad6a38-6b41-4330-9f47-0768e3c3e8e9"
00:22:56.866 ],
00:22:56.866 "product_name": "Malloc disk",
00:22:56.866 "block_size": 512,
00:22:56.866 "num_blocks": 65536,
00:22:56.866 "uuid": "3dad6a38-6b41-4330-9f47-0768e3c3e8e9",
00:22:56.866 "assigned_rate_limits": {
"rw_ios_per_sec": 0,
00:22:56.866 "rw_mbytes_per_sec": 0,
00:22:56.866 "r_mbytes_per_sec": 0,
00:22:56.866 "w_mbytes_per_sec": 0
00:22:56.866 },
00:22:56.866 "claimed": true,
00:22:56.866 "claim_type": "exclusive_write",
00:22:56.866 "zoned": false,
00:22:56.866 "supported_io_types": {
00:22:56.866 "read": true,
00:22:56.866 "write": true,
00:22:56.866 "unmap": true,
00:22:56.866 "flush": true,
00:22:56.866 "reset": true,
00:22:56.866 "nvme_admin": false,
00:22:56.866 "nvme_io": false,
00:22:56.866 "nvme_io_md": false,
00:22:56.866 "write_zeroes": true,
00:22:56.866 "zcopy": true,
00:22:56.866 "get_zone_info": false,
00:22:56.866 "zone_management": false,
00:22:56.866 "zone_append": false,
00:22:56.866 "compare": false,
00:22:56.866 "compare_and_write": false,
00:22:56.866 "abort": true,
00:22:56.866 "seek_hole": false,
00:22:56.866 "seek_data": false,
00:22:56.866 "copy": true,
00:22:56.866 "nvme_iov_md": false
00:22:56.866 },
00:22:56.866 "memory_domains": [
00:22:56.866 {
00:22:56.866 "dma_device_id": "system",
00:22:56.866 "dma_device_type": 1
00:22:56.866 },
00:22:56.866 {
00:22:56.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:56.866 "dma_device_type": 2
00:22:56.866 }
00:22:56.866 ],
00:22:56.866 "driver_specific": {}
00:22:56.866 }
00:22:56.866 ]
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:56.866 "name": "Existed_Raid",
00:22:56.866 "uuid": "40858898-14e5-4850-a0aa-48839a82abab",
00:22:56.866 "strip_size_kb": 64,
00:22:56.866 "state": "configuring",
00:22:56.866 "raid_level": "concat",
00:22:56.866 "superblock": true,
00:22:56.866 "num_base_bdevs": 3,
00:22:56.866 "num_base_bdevs_discovered": 1,
00:22:56.866 "num_base_bdevs_operational": 3,
00:22:56.866 "base_bdevs_list": [
00:22:56.866 {
00:22:56.866 "name": "BaseBdev1",
00:22:56.866 "uuid": "3dad6a38-6b41-4330-9f47-0768e3c3e8e9",
00:22:56.866 "is_configured": true,
00:22:56.866 "data_offset": 2048,
00:22:56.866 "data_size": 63488
00:22:56.866 },
00:22:56.866 {
00:22:56.866 "name": "BaseBdev2",
00:22:56.866 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:56.866 "is_configured": false,
00:22:56.866 "data_offset": 0,
00:22:56.866 "data_size": 0
00:22:56.866 },
00:22:56.866 {
00:22:56.866 "name": "BaseBdev3",
00:22:56.866 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:56.866 "is_configured": false,
00:22:56.866 "data_offset": 0,
00:22:56.866 "data_size": 0
00:22:56.866 }
00:22:56.866 ]
00:22:56.866 }'
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:56.866 13:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:57.435 [2024-11-20 13:43:00.075121] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:22:57.435 [2024-11-20 13:43:00.075194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:57.435 [2024-11-20 13:43:00.083281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:22:57.435 [2024-11-20
13:43:00.085849] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:22:57.435 [2024-11-20 13:43:00.085921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:22:57.435 [2024-11-20 13:43:00.085940] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:22:57.435 [2024-11-20 13:43:00.085957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.435 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:57.435 "name": "Existed_Raid",
00:22:57.435 "uuid": "30e55791-f54c-4299-8ed7-8dae6164fa0f",
00:22:57.435 "strip_size_kb": 64,
00:22:57.435 "state": "configuring",
00:22:57.435 "raid_level": "concat",
00:22:57.436 "superblock": true,
00:22:57.436 "num_base_bdevs": 3,
00:22:57.436 "num_base_bdevs_discovered": 1,
00:22:57.436 "num_base_bdevs_operational": 3,
00:22:57.436 "base_bdevs_list": [
00:22:57.436 {
00:22:57.436 "name": "BaseBdev1",
00:22:57.436 "uuid": "3dad6a38-6b41-4330-9f47-0768e3c3e8e9",
00:22:57.436 "is_configured": true,
00:22:57.436 "data_offset": 2048,
00:22:57.436 "data_size": 63488
00:22:57.436 },
00:22:57.436 {
00:22:57.436 "name": "BaseBdev2",
00:22:57.436 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:57.436 "is_configured": false,
00:22:57.436 "data_offset": 0,
00:22:57.436 "data_size": 0
00:22:57.436 },
00:22:57.436 {
00:22:57.436 "name": "BaseBdev3",
00:22:57.436 "uuid": "00000000-0000-0000-0000-000000000000",
00:22:57.436 "is_configured": false,
00:22:57.436 "data_offset": 0,
00:22:57.436 "data_size": 0
00:22:57.436 }
00:22:57.436 ]
00:22:57.436 }'
00:22:57.436 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:57.436 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:58.004 [2024-11-20 13:43:00.666646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
BaseBdev2
00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:22:58.004 [
00:22:58.004 {
00:22:58.004 "name": "BaseBdev2",
00:22:58.004 "aliases": [
00:22:58.004 "a6182749-00c6-43d3-bb70-0c60e1360022"
00:22:58.004 ],
00:22:58.004 "product_name": "Malloc disk",
00:22:58.004 "block_size": 512,
00:22:58.004 "num_blocks": 65536,
00:22:58.004 "uuid": "a6182749-00c6-43d3-bb70-0c60e1360022",
00:22:58.004 "assigned_rate_limits": {
00:22:58.004 "rw_ios_per_sec": 0,
00:22:58.004 "rw_mbytes_per_sec": 0,
00:22:58.004 "r_mbytes_per_sec": 0,
00:22:58.004 "w_mbytes_per_sec": 0
00:22:58.004 },
00:22:58.004 "claimed": true,
00:22:58.004 "claim_type": "exclusive_write",
00:22:58.004 "zoned": false,
00:22:58.004 "supported_io_types": {
00:22:58.004 "read": true,
00:22:58.004 "write": true,
00:22:58.004 "unmap": true,
00:22:58.004 "flush": true,
00:22:58.004 "reset": true,
00:22:58.004 "nvme_admin": false,
00:22:58.004 "nvme_io": false,
00:22:58.004 "nvme_io_md": false,
00:22:58.004 "write_zeroes": true,
00:22:58.004 "zcopy": true,
00:22:58.004 "get_zone_info": false,
00:22:58.004 "zone_management": false,
00:22:58.004 "zone_append": false,
00:22:58.004 "compare": false,
00:22:58.004 "compare_and_write": false,
00:22:58.004 "abort": true,
00:22:58.004 "seek_hole": false,
00:22:58.004 "seek_data": false,
00:22:58.004 "copy": true,
00:22:58.004 "nvme_iov_md": false
00:22:58.004 },
00:22:58.004 "memory_domains": [
00:22:58.004 {
00:22:58.004 "dma_device_id": "system",
00:22:58.004 "dma_device_type": 1
00:22:58.004 },
00:22:58.004 {
00:22:58.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:22:58.004 "dma_device_type": 2
00:22:58.004 }
00:22:58.004 ],
00:22:58.004 "driver_specific": {}
00:22:58.004 }
00:22:58.004 ]
00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb --
common/autotest_common.sh@911 -- # return 0 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:58.004 "name": "Existed_Raid", 00:22:58.004 "uuid": "30e55791-f54c-4299-8ed7-8dae6164fa0f", 00:22:58.004 "strip_size_kb": 64, 00:22:58.004 "state": "configuring", 00:22:58.004 "raid_level": "concat", 00:22:58.004 "superblock": true, 00:22:58.004 "num_base_bdevs": 3, 00:22:58.004 "num_base_bdevs_discovered": 2, 00:22:58.004 "num_base_bdevs_operational": 3, 00:22:58.004 "base_bdevs_list": [ 00:22:58.004 { 00:22:58.004 "name": "BaseBdev1", 00:22:58.004 "uuid": "3dad6a38-6b41-4330-9f47-0768e3c3e8e9", 00:22:58.004 "is_configured": true, 00:22:58.004 "data_offset": 2048, 00:22:58.004 "data_size": 63488 00:22:58.004 }, 00:22:58.004 { 00:22:58.004 "name": "BaseBdev2", 00:22:58.004 "uuid": "a6182749-00c6-43d3-bb70-0c60e1360022", 00:22:58.004 "is_configured": true, 00:22:58.004 "data_offset": 2048, 00:22:58.004 "data_size": 63488 00:22:58.004 }, 00:22:58.004 { 00:22:58.004 "name": "BaseBdev3", 00:22:58.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.004 "is_configured": false, 00:22:58.004 "data_offset": 0, 00:22:58.004 "data_size": 0 00:22:58.004 } 00:22:58.004 ] 00:22:58.004 }' 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:58.004 13:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.573 [2024-11-20 13:43:01.251346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:58.573 [2024-11-20 13:43:01.251681] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:58.573 [2024-11-20 13:43:01.251716] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:58.573 [2024-11-20 13:43:01.252074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:58.573 BaseBdev3 00:22:58.573 [2024-11-20 13:43:01.252308] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:58.573 [2024-11-20 13:43:01.252326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:58.573 [2024-11-20 13:43:01.252509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.573 [ 00:22:58.573 { 00:22:58.573 "name": "BaseBdev3", 00:22:58.573 "aliases": [ 00:22:58.573 "8dc2623c-0b8c-4859-8afb-adc27df580ad" 00:22:58.573 ], 00:22:58.573 "product_name": "Malloc disk", 00:22:58.573 "block_size": 512, 00:22:58.573 "num_blocks": 65536, 00:22:58.573 "uuid": "8dc2623c-0b8c-4859-8afb-adc27df580ad", 00:22:58.573 "assigned_rate_limits": { 00:22:58.573 "rw_ios_per_sec": 0, 00:22:58.573 "rw_mbytes_per_sec": 0, 00:22:58.573 "r_mbytes_per_sec": 0, 00:22:58.573 "w_mbytes_per_sec": 0 00:22:58.573 }, 00:22:58.573 "claimed": true, 00:22:58.573 "claim_type": "exclusive_write", 00:22:58.573 "zoned": false, 00:22:58.573 "supported_io_types": { 00:22:58.573 "read": true, 00:22:58.573 "write": true, 00:22:58.573 "unmap": true, 00:22:58.573 "flush": true, 00:22:58.573 "reset": true, 00:22:58.573 "nvme_admin": false, 00:22:58.573 "nvme_io": false, 00:22:58.573 "nvme_io_md": false, 00:22:58.573 "write_zeroes": true, 00:22:58.573 "zcopy": true, 00:22:58.573 "get_zone_info": false, 00:22:58.573 "zone_management": false, 00:22:58.573 "zone_append": false, 00:22:58.573 "compare": false, 00:22:58.573 "compare_and_write": false, 00:22:58.573 "abort": true, 00:22:58.573 "seek_hole": false, 00:22:58.573 "seek_data": false, 00:22:58.573 "copy": true, 00:22:58.573 "nvme_iov_md": false 00:22:58.573 }, 00:22:58.573 "memory_domains": [ 00:22:58.573 { 00:22:58.573 "dma_device_id": "system", 00:22:58.573 "dma_device_type": 1 00:22:58.573 }, 00:22:58.573 { 00:22:58.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.573 "dma_device_type": 2 00:22:58.573 } 00:22:58.573 ], 00:22:58.573 "driver_specific": 
{} 00:22:58.573 } 00:22:58.573 ] 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:58.573 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:58.574 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.574 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:58.574 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:22:58.574 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.574 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.574 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:58.574 "name": "Existed_Raid", 00:22:58.574 "uuid": "30e55791-f54c-4299-8ed7-8dae6164fa0f", 00:22:58.574 "strip_size_kb": 64, 00:22:58.574 "state": "online", 00:22:58.574 "raid_level": "concat", 00:22:58.574 "superblock": true, 00:22:58.574 "num_base_bdevs": 3, 00:22:58.574 "num_base_bdevs_discovered": 3, 00:22:58.574 "num_base_bdevs_operational": 3, 00:22:58.574 "base_bdevs_list": [ 00:22:58.574 { 00:22:58.574 "name": "BaseBdev1", 00:22:58.574 "uuid": "3dad6a38-6b41-4330-9f47-0768e3c3e8e9", 00:22:58.574 "is_configured": true, 00:22:58.574 "data_offset": 2048, 00:22:58.574 "data_size": 63488 00:22:58.574 }, 00:22:58.574 { 00:22:58.574 "name": "BaseBdev2", 00:22:58.574 "uuid": "a6182749-00c6-43d3-bb70-0c60e1360022", 00:22:58.574 "is_configured": true, 00:22:58.574 "data_offset": 2048, 00:22:58.574 "data_size": 63488 00:22:58.574 }, 00:22:58.574 { 00:22:58.574 "name": "BaseBdev3", 00:22:58.574 "uuid": "8dc2623c-0b8c-4859-8afb-adc27df580ad", 00:22:58.574 "is_configured": true, 00:22:58.574 "data_offset": 2048, 00:22:58.574 "data_size": 63488 00:22:58.574 } 00:22:58.574 ] 00:22:58.574 }' 00:22:58.574 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:58.574 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.140 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.141 [2024-11-20 13:43:01.848011] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:59.141 "name": "Existed_Raid", 00:22:59.141 "aliases": [ 00:22:59.141 "30e55791-f54c-4299-8ed7-8dae6164fa0f" 00:22:59.141 ], 00:22:59.141 "product_name": "Raid Volume", 00:22:59.141 "block_size": 512, 00:22:59.141 "num_blocks": 190464, 00:22:59.141 "uuid": "30e55791-f54c-4299-8ed7-8dae6164fa0f", 00:22:59.141 "assigned_rate_limits": { 00:22:59.141 "rw_ios_per_sec": 0, 00:22:59.141 "rw_mbytes_per_sec": 0, 00:22:59.141 "r_mbytes_per_sec": 0, 00:22:59.141 "w_mbytes_per_sec": 0 00:22:59.141 }, 00:22:59.141 "claimed": false, 00:22:59.141 "zoned": false, 00:22:59.141 "supported_io_types": { 00:22:59.141 "read": true, 00:22:59.141 "write": true, 00:22:59.141 "unmap": true, 00:22:59.141 "flush": true, 00:22:59.141 "reset": true, 00:22:59.141 "nvme_admin": false, 00:22:59.141 "nvme_io": false, 00:22:59.141 "nvme_io_md": false, 00:22:59.141 
"write_zeroes": true, 00:22:59.141 "zcopy": false, 00:22:59.141 "get_zone_info": false, 00:22:59.141 "zone_management": false, 00:22:59.141 "zone_append": false, 00:22:59.141 "compare": false, 00:22:59.141 "compare_and_write": false, 00:22:59.141 "abort": false, 00:22:59.141 "seek_hole": false, 00:22:59.141 "seek_data": false, 00:22:59.141 "copy": false, 00:22:59.141 "nvme_iov_md": false 00:22:59.141 }, 00:22:59.141 "memory_domains": [ 00:22:59.141 { 00:22:59.141 "dma_device_id": "system", 00:22:59.141 "dma_device_type": 1 00:22:59.141 }, 00:22:59.141 { 00:22:59.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:59.141 "dma_device_type": 2 00:22:59.141 }, 00:22:59.141 { 00:22:59.141 "dma_device_id": "system", 00:22:59.141 "dma_device_type": 1 00:22:59.141 }, 00:22:59.141 { 00:22:59.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:59.141 "dma_device_type": 2 00:22:59.141 }, 00:22:59.141 { 00:22:59.141 "dma_device_id": "system", 00:22:59.141 "dma_device_type": 1 00:22:59.141 }, 00:22:59.141 { 00:22:59.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:59.141 "dma_device_type": 2 00:22:59.141 } 00:22:59.141 ], 00:22:59.141 "driver_specific": { 00:22:59.141 "raid": { 00:22:59.141 "uuid": "30e55791-f54c-4299-8ed7-8dae6164fa0f", 00:22:59.141 "strip_size_kb": 64, 00:22:59.141 "state": "online", 00:22:59.141 "raid_level": "concat", 00:22:59.141 "superblock": true, 00:22:59.141 "num_base_bdevs": 3, 00:22:59.141 "num_base_bdevs_discovered": 3, 00:22:59.141 "num_base_bdevs_operational": 3, 00:22:59.141 "base_bdevs_list": [ 00:22:59.141 { 00:22:59.141 "name": "BaseBdev1", 00:22:59.141 "uuid": "3dad6a38-6b41-4330-9f47-0768e3c3e8e9", 00:22:59.141 "is_configured": true, 00:22:59.141 "data_offset": 2048, 00:22:59.141 "data_size": 63488 00:22:59.141 }, 00:22:59.141 { 00:22:59.141 "name": "BaseBdev2", 00:22:59.141 "uuid": "a6182749-00c6-43d3-bb70-0c60e1360022", 00:22:59.141 "is_configured": true, 00:22:59.141 "data_offset": 2048, 00:22:59.141 "data_size": 63488 00:22:59.141 }, 
00:22:59.141 { 00:22:59.141 "name": "BaseBdev3", 00:22:59.141 "uuid": "8dc2623c-0b8c-4859-8afb-adc27df580ad", 00:22:59.141 "is_configured": true, 00:22:59.141 "data_offset": 2048, 00:22:59.141 "data_size": 63488 00:22:59.141 } 00:22:59.141 ] 00:22:59.141 } 00:22:59.141 } 00:22:59.141 }' 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:59.141 BaseBdev2 00:22:59.141 BaseBdev3' 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.141 13:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:59.141 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.141 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:59.141 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:59.141 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:59.400 
13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:59.400 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:59.400 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.400 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.400 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.400 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:59.400 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.401 [2024-11-20 13:43:02.163780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:59.401 [2024-11-20 13:43:02.163826] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:59.401 [2024-11-20 13:43:02.163934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.401 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.659 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.659 "name": "Existed_Raid", 00:22:59.659 "uuid": "30e55791-f54c-4299-8ed7-8dae6164fa0f", 00:22:59.659 "strip_size_kb": 64, 00:22:59.659 "state": "offline", 00:22:59.659 "raid_level": "concat", 00:22:59.659 "superblock": true, 00:22:59.659 "num_base_bdevs": 3, 00:22:59.659 "num_base_bdevs_discovered": 2, 00:22:59.659 "num_base_bdevs_operational": 2, 00:22:59.659 "base_bdevs_list": [ 00:22:59.659 { 00:22:59.659 "name": null, 00:22:59.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.659 "is_configured": false, 00:22:59.659 "data_offset": 0, 00:22:59.659 "data_size": 63488 00:22:59.659 }, 00:22:59.659 { 00:22:59.660 "name": "BaseBdev2", 00:22:59.660 "uuid": "a6182749-00c6-43d3-bb70-0c60e1360022", 00:22:59.660 "is_configured": true, 00:22:59.660 "data_offset": 2048, 00:22:59.660 "data_size": 63488 00:22:59.660 }, 00:22:59.660 { 00:22:59.660 "name": "BaseBdev3", 00:22:59.660 "uuid": "8dc2623c-0b8c-4859-8afb-adc27df580ad", 
00:22:59.660 "is_configured": true, 00:22:59.660 "data_offset": 2048, 00:22:59.660 "data_size": 63488 00:22:59.660 } 00:22:59.660 ] 00:22:59.660 }' 00:22:59.660 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.660 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.917 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:59.917 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:59.917 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.917 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.917 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:59.917 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.917 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.917 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:59.917 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:59.917 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:59.917 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.917 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.917 [2024-11-20 13:43:02.807849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:00.238 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.238 13:43:02 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:00.239 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:00.239 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.239 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.239 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:00.239 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:00.239 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.239 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:00.239 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:00.239 13:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:23:00.239 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.239 13:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:00.239 [2024-11-20 13:43:02.967429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:00.239 [2024-11-20 13:43:02.967521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:00.239 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.239 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:00.239 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:00.239 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all
00:23:00.239 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:23:00.239 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.239 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:00.239 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.239 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:23:00.239 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:23:00.239 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:23:00.239 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:23:00.239 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:23:00.239 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:23:00.239 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.239 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:00.498 BaseBdev2
00:23:00.498 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.498 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:23:00.498 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:23:00.498 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:23:00.498 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:23:00.498 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:23:00.498 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:23:00.498 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:23:00.498 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.498 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:00.498 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.498 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:23:00.498 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.498 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:00.498 [
00:23:00.498 {
00:23:00.498 "name": "BaseBdev2",
00:23:00.498 "aliases": [
00:23:00.498 "1e37abe0-cf94-4586-8505-c5b50e1415d4"
00:23:00.498 ],
00:23:00.498 "product_name": "Malloc disk",
00:23:00.498 "block_size": 512,
00:23:00.498 "num_blocks": 65536,
00:23:00.498 "uuid": "1e37abe0-cf94-4586-8505-c5b50e1415d4",
00:23:00.498 "assigned_rate_limits": {
00:23:00.498 "rw_ios_per_sec": 0,
00:23:00.498 "rw_mbytes_per_sec": 0,
00:23:00.498 "r_mbytes_per_sec": 0,
00:23:00.498 "w_mbytes_per_sec": 0
00:23:00.498 },
00:23:00.498 "claimed": false,
00:23:00.498 "zoned": false,
00:23:00.498 "supported_io_types": {
00:23:00.498 "read": true,
00:23:00.498 "write": true,
00:23:00.498 "unmap": true,
00:23:00.498 "flush": true,
00:23:00.498 "reset": true,
00:23:00.498 "nvme_admin": false,
00:23:00.498 "nvme_io": false,
00:23:00.498 "nvme_io_md": false,
00:23:00.498 "write_zeroes": true,
00:23:00.498 "zcopy": true,
00:23:00.498 "get_zone_info": false,
00:23:00.498 "zone_management": false,
00:23:00.498 "zone_append": false,
00:23:00.498 "compare": false,
00:23:00.498 "compare_and_write": false,
00:23:00.498 "abort": true,
00:23:00.498 "seek_hole": false,
00:23:00.498 "seek_data": false,
00:23:00.498 "copy": true,
00:23:00.498 "nvme_iov_md": false
00:23:00.498 },
00:23:00.498 "memory_domains": [
00:23:00.498 {
00:23:00.498 "dma_device_id": "system",
00:23:00.498 "dma_device_type": 1
00:23:00.498 },
00:23:00.498 {
00:23:00.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:00.498 "dma_device_type": 2
00:23:00.498 }
00:23:00.498 ],
00:23:00.498 "driver_specific": {}
00:23:00.498 }
00:23:00.498 ]
00:23:00.498 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.498 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:23:00.498 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:00.499 BaseBdev3
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:00.499 [
00:23:00.499 {
00:23:00.499 "name": "BaseBdev3",
00:23:00.499 "aliases": [
00:23:00.499 "5a94bc75-4204-48be-9af9-6da60b033682"
00:23:00.499 ],
00:23:00.499 "product_name": "Malloc disk",
00:23:00.499 "block_size": 512,
00:23:00.499 "num_blocks": 65536,
00:23:00.499 "uuid": "5a94bc75-4204-48be-9af9-6da60b033682",
00:23:00.499 "assigned_rate_limits": {
00:23:00.499 "rw_ios_per_sec": 0,
00:23:00.499 "rw_mbytes_per_sec": 0,
00:23:00.499 "r_mbytes_per_sec": 0,
00:23:00.499 "w_mbytes_per_sec": 0
00:23:00.499 },
00:23:00.499 "claimed": false,
00:23:00.499 "zoned": false,
00:23:00.499 "supported_io_types": {
00:23:00.499 "read": true,
00:23:00.499 "write": true,
00:23:00.499 "unmap": true,
00:23:00.499 "flush": true,
00:23:00.499 "reset": true,
00:23:00.499 "nvme_admin": false,
00:23:00.499 "nvme_io": false,
00:23:00.499 "nvme_io_md": false,
00:23:00.499 "write_zeroes": true,
00:23:00.499 "zcopy": true,
00:23:00.499 "get_zone_info": false,
00:23:00.499 "zone_management": false,
00:23:00.499 "zone_append": false,
00:23:00.499 "compare": false,
00:23:00.499 "compare_and_write": false,
00:23:00.499 "abort": true,
00:23:00.499 "seek_hole": false,
00:23:00.499 "seek_data": false,
00:23:00.499 "copy": true,
00:23:00.499 "nvme_iov_md": false
00:23:00.499 },
00:23:00.499 "memory_domains": [
00:23:00.499 {
00:23:00.499 "dma_device_id": "system",
00:23:00.499 "dma_device_type": 1
00:23:00.499 },
00:23:00.499 {
00:23:00.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:00.499 "dma_device_type": 2
00:23:00.499 }
00:23:00.499 ],
00:23:00.499 "driver_specific": {}
00:23:00.499 }
00:23:00.499 ]
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:00.499 [2024-11-20 13:43:03.267108] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:23:00.499 [2024-11-20 13:43:03.267177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:23:00.499 [2024-11-20 13:43:03.267217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:23:00.499 [2024-11-20 13:43:03.270088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:00.499 "name": "Existed_Raid",
00:23:00.499 "uuid": "411c27c7-f16f-4222-97d0-2a5894efb503",
00:23:00.499 "strip_size_kb": 64,
00:23:00.499 "state": "configuring",
00:23:00.499 "raid_level": "concat",
00:23:00.499 "superblock": true,
00:23:00.499 "num_base_bdevs": 3,
00:23:00.499 "num_base_bdevs_discovered": 2,
00:23:00.499 "num_base_bdevs_operational": 3,
00:23:00.499 "base_bdevs_list": [
00:23:00.499 {
00:23:00.499 "name": "BaseBdev1",
00:23:00.499 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:00.499 "is_configured": false,
00:23:00.499 "data_offset": 0,
00:23:00.499 "data_size": 0
00:23:00.499 },
00:23:00.499 {
00:23:00.499 "name": "BaseBdev2",
00:23:00.499 "uuid": "1e37abe0-cf94-4586-8505-c5b50e1415d4",
00:23:00.499 "is_configured": true,
00:23:00.499 "data_offset": 2048,
00:23:00.499 "data_size": 63488
00:23:00.499 },
00:23:00.499 {
00:23:00.499 "name": "BaseBdev3",
00:23:00.499 "uuid": "5a94bc75-4204-48be-9af9-6da60b033682",
00:23:00.499 "is_configured": true,
00:23:00.499 "data_offset": 2048,
00:23:00.499 "data_size": 63488
00:23:00.499 }
00:23:00.499 ]
00:23:00.499 }'
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:00.499 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:01.064 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:23:01.064 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.064 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:01.065 [2024-11-20 13:43:03.823230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:23:01.065 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:01.065 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:23:01.065 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:23:01.065 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:01.065 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:23:01.065 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:23:01.065 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:01.065 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:01.065 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:01.065 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:01.065 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:01.065 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:01.065 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:01.065 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.065 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:01.065 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:01.065 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:01.065 "name": "Existed_Raid",
00:23:01.065 "uuid": "411c27c7-f16f-4222-97d0-2a5894efb503",
00:23:01.065 "strip_size_kb": 64,
00:23:01.065 "state": "configuring",
00:23:01.065 "raid_level": "concat",
00:23:01.065 "superblock": true,
00:23:01.065 "num_base_bdevs": 3,
00:23:01.065 "num_base_bdevs_discovered": 1,
00:23:01.065 "num_base_bdevs_operational": 3,
00:23:01.065 "base_bdevs_list": [
00:23:01.065 {
00:23:01.065 "name": "BaseBdev1",
00:23:01.065 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:01.065 "is_configured": false,
00:23:01.065 "data_offset": 0,
00:23:01.065 "data_size": 0
00:23:01.065 },
00:23:01.065 {
00:23:01.065 "name": null,
00:23:01.065 "uuid": "1e37abe0-cf94-4586-8505-c5b50e1415d4",
00:23:01.065 "is_configured": false,
00:23:01.065 "data_offset": 0,
00:23:01.065 "data_size": 63488
00:23:01.065 },
00:23:01.065 {
00:23:01.065 "name": "BaseBdev3",
00:23:01.065 "uuid": "5a94bc75-4204-48be-9af9-6da60b033682",
00:23:01.065 "is_configured": true,
00:23:01.065 "data_offset": 2048,
00:23:01.065 "data_size": 63488
00:23:01.065 }
00:23:01.065 ]
00:23:01.065 }'
00:23:01.065 13:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:01.065 13:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:01.629 [2024-11-20 13:43:04.477345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:01.629 BaseBdev1
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:01.629 [
00:23:01.629 {
00:23:01.629 "name": "BaseBdev1",
00:23:01.629 "aliases": [
00:23:01.629 "677928d6-9257-4a34-b264-45822535289d"
00:23:01.629 ],
00:23:01.629 "product_name": "Malloc disk",
00:23:01.629 "block_size": 512,
00:23:01.629 "num_blocks": 65536,
00:23:01.629 "uuid": "677928d6-9257-4a34-b264-45822535289d",
00:23:01.629 "assigned_rate_limits": {
00:23:01.629 "rw_ios_per_sec": 0,
00:23:01.629 "rw_mbytes_per_sec": 0,
00:23:01.629 "r_mbytes_per_sec": 0,
00:23:01.629 "w_mbytes_per_sec": 0
00:23:01.629 },
00:23:01.629 "claimed": true,
00:23:01.629 "claim_type": "exclusive_write",
00:23:01.629 "zoned": false,
00:23:01.629 "supported_io_types": {
00:23:01.629 "read": true,
00:23:01.629 "write": true,
00:23:01.629 "unmap": true,
00:23:01.629 "flush": true,
00:23:01.629 "reset": true,
00:23:01.629 "nvme_admin": false,
00:23:01.629 "nvme_io": false,
00:23:01.629 "nvme_io_md": false,
00:23:01.629 "write_zeroes": true,
00:23:01.629 "zcopy": true,
00:23:01.629 "get_zone_info": false,
00:23:01.629 "zone_management": false,
00:23:01.629 "zone_append": false,
00:23:01.629 "compare": false,
00:23:01.629 "compare_and_write": false,
00:23:01.629 "abort": true,
00:23:01.629 "seek_hole": false,
00:23:01.629 "seek_data": false,
00:23:01.629 "copy": true,
00:23:01.629 "nvme_iov_md": false
00:23:01.629 },
00:23:01.629 "memory_domains": [
00:23:01.629 {
00:23:01.629 "dma_device_id": "system",
00:23:01.629 "dma_device_type": 1
00:23:01.629 },
00:23:01.629 {
00:23:01.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:01.629 "dma_device_type": 2
00:23:01.629 }
00:23:01.629 ],
00:23:01.629 "driver_specific": {}
00:23:01.629 }
00:23:01.629 ]
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:23:01.629 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:23:01.630 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:01.630 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:01.630 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:01.630 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:01.630 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:01.630 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:01.630 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:01.630 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:01.630 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:01.630 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:01.887 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:01.887 "name": "Existed_Raid",
00:23:01.887 "uuid": "411c27c7-f16f-4222-97d0-2a5894efb503",
00:23:01.887 "strip_size_kb": 64,
00:23:01.887 "state": "configuring",
00:23:01.887 "raid_level": "concat",
00:23:01.887 "superblock": true,
00:23:01.887 "num_base_bdevs": 3,
00:23:01.887 "num_base_bdevs_discovered": 2,
00:23:01.887 "num_base_bdevs_operational": 3,
00:23:01.887 "base_bdevs_list": [
00:23:01.887 {
00:23:01.887 "name": "BaseBdev1",
00:23:01.887 "uuid": "677928d6-9257-4a34-b264-45822535289d",
00:23:01.887 "is_configured": true,
00:23:01.887 "data_offset": 2048,
00:23:01.887 "data_size": 63488
00:23:01.888 },
00:23:01.888 {
00:23:01.888 "name": null,
00:23:01.888 "uuid": "1e37abe0-cf94-4586-8505-c5b50e1415d4",
00:23:01.888 "is_configured": false,
00:23:01.888 "data_offset": 0,
00:23:01.888 "data_size": 63488
00:23:01.888 },
00:23:01.888 {
00:23:01.888 "name": "BaseBdev3",
00:23:01.888 "uuid": "5a94bc75-4204-48be-9af9-6da60b033682",
00:23:01.888 "is_configured": true,
00:23:01.888 "data_offset": 2048,
00:23:01.888 "data_size": 63488
00:23:01.888 }
00:23:01.888 ]
00:23:01.888 }'
00:23:01.888 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:01.888 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:02.146 [2024-11-20 13:43:04.993583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:02.146 13:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:02.146 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:02.146 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:02.146 13:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.146 13:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:02.146 13:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.146 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:02.146 "name": "Existed_Raid",
00:23:02.146 "uuid": "411c27c7-f16f-4222-97d0-2a5894efb503",
00:23:02.146 "strip_size_kb": 64,
00:23:02.146 "state": "configuring",
00:23:02.146 "raid_level": "concat",
00:23:02.146 "superblock": true,
00:23:02.146 "num_base_bdevs": 3,
00:23:02.146 "num_base_bdevs_discovered": 1,
00:23:02.146 "num_base_bdevs_operational": 3,
00:23:02.146 "base_bdevs_list": [
00:23:02.146 {
00:23:02.146 "name": "BaseBdev1",
00:23:02.146 "uuid": "677928d6-9257-4a34-b264-45822535289d",
00:23:02.146 "is_configured": true,
00:23:02.146 "data_offset": 2048,
00:23:02.146 "data_size": 63488
00:23:02.146 },
00:23:02.146 {
00:23:02.146 "name": null,
00:23:02.146 "uuid": "1e37abe0-cf94-4586-8505-c5b50e1415d4",
00:23:02.146 "is_configured": false,
00:23:02.146 "data_offset": 0,
00:23:02.146 "data_size": 63488
00:23:02.146 },
00:23:02.146 {
00:23:02.146 "name": null,
00:23:02.146 "uuid": "5a94bc75-4204-48be-9af9-6da60b033682",
00:23:02.146 "is_configured": false,
00:23:02.146 "data_offset": 0,
00:23:02.146 "data_size": 63488
00:23:02.146 }
00:23:02.146 ]
00:23:02.146 }'
00:23:02.146 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:02.146 13:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:02.713 [2024-11-20 13:43:05.525754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:02.713 "name": "Existed_Raid",
00:23:02.713 "uuid": "411c27c7-f16f-4222-97d0-2a5894efb503",
00:23:02.713 "strip_size_kb": 64,
00:23:02.713 "state": "configuring",
00:23:02.713 "raid_level": "concat",
00:23:02.713 "superblock": true,
00:23:02.713 "num_base_bdevs": 3,
00:23:02.713 "num_base_bdevs_discovered": 2,
00:23:02.713 "num_base_bdevs_operational": 3,
00:23:02.713 "base_bdevs_list": [
00:23:02.713 {
00:23:02.713 "name": "BaseBdev1",
00:23:02.713 "uuid": "677928d6-9257-4a34-b264-45822535289d",
00:23:02.713 "is_configured": true,
00:23:02.713 "data_offset": 2048,
00:23:02.713 "data_size": 63488
00:23:02.713 },
00:23:02.713 {
00:23:02.713 "name": null,
00:23:02.713 "uuid": "1e37abe0-cf94-4586-8505-c5b50e1415d4",
00:23:02.713 "is_configured": false,
00:23:02.713 "data_offset": 0,
00:23:02.713 "data_size": 63488
00:23:02.713 },
00:23:02.713 {
00:23:02.713 "name": "BaseBdev3",
00:23:02.713 "uuid": "5a94bc75-4204-48be-9af9-6da60b033682",
00:23:02.713 "is_configured": true,
00:23:02.713 "data_offset": 2048,
00:23:02.713 "data_size": 63488
00:23:02.713 }
00:23:02.713 ]
00:23:02.713 }'
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:02.713 13:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:03.282 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:03.282 13:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:23:03.282 13:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:03.282 13:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:03.282 [2024-11-20 13:43:06.041938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:03.282 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:03.282 "name": "Existed_Raid",
00:23:03.282 "uuid": "411c27c7-f16f-4222-97d0-2a5894efb503",
00:23:03.282 "strip_size_kb": 64,
00:23:03.282 "state": "configuring",
00:23:03.282 "raid_level": "concat",
00:23:03.282 "superblock": true,
00:23:03.282 "num_base_bdevs": 3,
00:23:03.282 "num_base_bdevs_discovered": 1,
00:23:03.282 "num_base_bdevs_operational": 3,
00:23:03.282 "base_bdevs_list": [
00:23:03.282 {
00:23:03.282 "name": null,
00:23:03.282 "uuid": "677928d6-9257-4a34-b264-45822535289d",
00:23:03.282 "is_configured": false,
00:23:03.282 "data_offset": 0,
00:23:03.283 "data_size": 63488
00:23:03.283 },
00:23:03.283 {
00:23:03.283 "name": null,
00:23:03.283 "uuid": "1e37abe0-cf94-4586-8505-c5b50e1415d4",
00:23:03.283 "is_configured": false,
00:23:03.283 "data_offset": 0,
00:23:03.283 "data_size": 63488
00:23:03.283 },
00:23:03.283 {
00:23:03.283 "name": "BaseBdev3",
00:23:03.283 "uuid": "5a94bc75-4204-48be-9af9-6da60b033682",
00:23:03.283 "is_configured": true,
00:23:03.283 "data_offset": 2048,
00:23:03.283 "data_size": 63488
00:23:03.283 }
00:23:03.283 ]
00:23:03.283 }'
00:23:03.283 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:03.283 13:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:03.851 [2024-11-20 13:43:06.645149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:03.851 "name": "Existed_Raid",
00:23:03.851 "uuid": "411c27c7-f16f-4222-97d0-2a5894efb503",
00:23:03.851 "strip_size_kb": 64,
00:23:03.851 "state": "configuring",
00:23:03.851 "raid_level": "concat",
00:23:03.851 "superblock": true,
00:23:03.851
"num_base_bdevs": 3, 00:23:03.851 "num_base_bdevs_discovered": 2, 00:23:03.851 "num_base_bdevs_operational": 3, 00:23:03.851 "base_bdevs_list": [ 00:23:03.851 { 00:23:03.851 "name": null, 00:23:03.851 "uuid": "677928d6-9257-4a34-b264-45822535289d", 00:23:03.851 "is_configured": false, 00:23:03.851 "data_offset": 0, 00:23:03.851 "data_size": 63488 00:23:03.851 }, 00:23:03.851 { 00:23:03.851 "name": "BaseBdev2", 00:23:03.851 "uuid": "1e37abe0-cf94-4586-8505-c5b50e1415d4", 00:23:03.851 "is_configured": true, 00:23:03.851 "data_offset": 2048, 00:23:03.851 "data_size": 63488 00:23:03.851 }, 00:23:03.851 { 00:23:03.851 "name": "BaseBdev3", 00:23:03.851 "uuid": "5a94bc75-4204-48be-9af9-6da60b033682", 00:23:03.851 "is_configured": true, 00:23:03.851 "data_offset": 2048, 00:23:03.851 "data_size": 63488 00:23:03.851 } 00:23:03.851 ] 00:23:03.851 }' 00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:03.851 13:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 677928d6-9257-4a34-b264-45822535289d 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.420 [2024-11-20 13:43:07.295604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:04.420 [2024-11-20 13:43:07.295887] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:04.420 [2024-11-20 13:43:07.295935] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:04.420 NewBaseBdev 00:23:04.420 [2024-11-20 13:43:07.296265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:04.420 [2024-11-20 13:43:07.296452] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:04.420 [2024-11-20 13:43:07.296469] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:23:04.420 [2024-11-20 13:43:07.296640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- 
# local bdev_name=NewBaseBdev 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.420 [ 00:23:04.420 { 00:23:04.420 "name": "NewBaseBdev", 00:23:04.420 "aliases": [ 00:23:04.420 "677928d6-9257-4a34-b264-45822535289d" 00:23:04.420 ], 00:23:04.420 "product_name": "Malloc disk", 00:23:04.420 "block_size": 512, 00:23:04.420 "num_blocks": 65536, 00:23:04.420 "uuid": "677928d6-9257-4a34-b264-45822535289d", 00:23:04.420 "assigned_rate_limits": { 00:23:04.420 "rw_ios_per_sec": 0, 00:23:04.420 "rw_mbytes_per_sec": 0, 00:23:04.420 "r_mbytes_per_sec": 0, 00:23:04.420 "w_mbytes_per_sec": 0 00:23:04.420 }, 00:23:04.420 "claimed": true, 00:23:04.420 "claim_type": "exclusive_write", 00:23:04.420 "zoned": false, 00:23:04.420 "supported_io_types": { 00:23:04.420 "read": true, 
00:23:04.420 "write": true, 00:23:04.420 "unmap": true, 00:23:04.420 "flush": true, 00:23:04.420 "reset": true, 00:23:04.420 "nvme_admin": false, 00:23:04.420 "nvme_io": false, 00:23:04.420 "nvme_io_md": false, 00:23:04.420 "write_zeroes": true, 00:23:04.420 "zcopy": true, 00:23:04.420 "get_zone_info": false, 00:23:04.420 "zone_management": false, 00:23:04.420 "zone_append": false, 00:23:04.420 "compare": false, 00:23:04.420 "compare_and_write": false, 00:23:04.420 "abort": true, 00:23:04.420 "seek_hole": false, 00:23:04.420 "seek_data": false, 00:23:04.420 "copy": true, 00:23:04.420 "nvme_iov_md": false 00:23:04.420 }, 00:23:04.420 "memory_domains": [ 00:23:04.420 { 00:23:04.420 "dma_device_id": "system", 00:23:04.420 "dma_device_type": 1 00:23:04.420 }, 00:23:04.420 { 00:23:04.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.420 "dma_device_type": 2 00:23:04.420 } 00:23:04.420 ], 00:23:04.420 "driver_specific": {} 00:23:04.420 } 00:23:04.420 ] 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:04.420 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:04.679 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.679 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.679 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.679 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.679 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.679 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:04.679 "name": "Existed_Raid", 00:23:04.679 "uuid": "411c27c7-f16f-4222-97d0-2a5894efb503", 00:23:04.679 "strip_size_kb": 64, 00:23:04.679 "state": "online", 00:23:04.679 "raid_level": "concat", 00:23:04.679 "superblock": true, 00:23:04.679 "num_base_bdevs": 3, 00:23:04.679 "num_base_bdevs_discovered": 3, 00:23:04.679 "num_base_bdevs_operational": 3, 00:23:04.679 "base_bdevs_list": [ 00:23:04.679 { 00:23:04.679 "name": "NewBaseBdev", 00:23:04.679 "uuid": "677928d6-9257-4a34-b264-45822535289d", 00:23:04.679 "is_configured": true, 00:23:04.679 "data_offset": 2048, 00:23:04.679 "data_size": 63488 00:23:04.679 }, 00:23:04.679 { 00:23:04.679 "name": "BaseBdev2", 00:23:04.679 "uuid": "1e37abe0-cf94-4586-8505-c5b50e1415d4", 00:23:04.679 "is_configured": true, 00:23:04.679 "data_offset": 2048, 00:23:04.679 "data_size": 63488 00:23:04.679 }, 00:23:04.679 { 00:23:04.679 "name": "BaseBdev3", 00:23:04.679 "uuid": 
"5a94bc75-4204-48be-9af9-6da60b033682", 00:23:04.679 "is_configured": true, 00:23:04.679 "data_offset": 2048, 00:23:04.679 "data_size": 63488 00:23:04.679 } 00:23:04.679 ] 00:23:04.679 }' 00:23:04.679 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:04.679 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.248 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:23:05.248 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:05.248 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:05.248 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:05.248 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:05.248 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:05.248 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:05.248 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:05.248 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.248 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.248 [2024-11-20 13:43:07.932229] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:05.248 13:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.248 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:05.248 "name": "Existed_Raid", 00:23:05.248 "aliases": [ 00:23:05.248 "411c27c7-f16f-4222-97d0-2a5894efb503" 
00:23:05.248 ], 00:23:05.248 "product_name": "Raid Volume", 00:23:05.248 "block_size": 512, 00:23:05.248 "num_blocks": 190464, 00:23:05.248 "uuid": "411c27c7-f16f-4222-97d0-2a5894efb503", 00:23:05.248 "assigned_rate_limits": { 00:23:05.248 "rw_ios_per_sec": 0, 00:23:05.248 "rw_mbytes_per_sec": 0, 00:23:05.248 "r_mbytes_per_sec": 0, 00:23:05.248 "w_mbytes_per_sec": 0 00:23:05.248 }, 00:23:05.248 "claimed": false, 00:23:05.248 "zoned": false, 00:23:05.248 "supported_io_types": { 00:23:05.248 "read": true, 00:23:05.248 "write": true, 00:23:05.248 "unmap": true, 00:23:05.248 "flush": true, 00:23:05.248 "reset": true, 00:23:05.248 "nvme_admin": false, 00:23:05.248 "nvme_io": false, 00:23:05.248 "nvme_io_md": false, 00:23:05.248 "write_zeroes": true, 00:23:05.248 "zcopy": false, 00:23:05.248 "get_zone_info": false, 00:23:05.248 "zone_management": false, 00:23:05.248 "zone_append": false, 00:23:05.248 "compare": false, 00:23:05.248 "compare_and_write": false, 00:23:05.248 "abort": false, 00:23:05.248 "seek_hole": false, 00:23:05.248 "seek_data": false, 00:23:05.248 "copy": false, 00:23:05.248 "nvme_iov_md": false 00:23:05.248 }, 00:23:05.248 "memory_domains": [ 00:23:05.248 { 00:23:05.248 "dma_device_id": "system", 00:23:05.248 "dma_device_type": 1 00:23:05.248 }, 00:23:05.248 { 00:23:05.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.248 "dma_device_type": 2 00:23:05.248 }, 00:23:05.248 { 00:23:05.248 "dma_device_id": "system", 00:23:05.248 "dma_device_type": 1 00:23:05.248 }, 00:23:05.248 { 00:23:05.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.248 "dma_device_type": 2 00:23:05.248 }, 00:23:05.248 { 00:23:05.248 "dma_device_id": "system", 00:23:05.248 "dma_device_type": 1 00:23:05.248 }, 00:23:05.248 { 00:23:05.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.248 "dma_device_type": 2 00:23:05.248 } 00:23:05.248 ], 00:23:05.248 "driver_specific": { 00:23:05.248 "raid": { 00:23:05.248 "uuid": "411c27c7-f16f-4222-97d0-2a5894efb503", 00:23:05.248 
"strip_size_kb": 64, 00:23:05.248 "state": "online", 00:23:05.248 "raid_level": "concat", 00:23:05.248 "superblock": true, 00:23:05.248 "num_base_bdevs": 3, 00:23:05.248 "num_base_bdevs_discovered": 3, 00:23:05.248 "num_base_bdevs_operational": 3, 00:23:05.248 "base_bdevs_list": [ 00:23:05.248 { 00:23:05.248 "name": "NewBaseBdev", 00:23:05.248 "uuid": "677928d6-9257-4a34-b264-45822535289d", 00:23:05.248 "is_configured": true, 00:23:05.248 "data_offset": 2048, 00:23:05.248 "data_size": 63488 00:23:05.248 }, 00:23:05.248 { 00:23:05.248 "name": "BaseBdev2", 00:23:05.248 "uuid": "1e37abe0-cf94-4586-8505-c5b50e1415d4", 00:23:05.248 "is_configured": true, 00:23:05.248 "data_offset": 2048, 00:23:05.248 "data_size": 63488 00:23:05.248 }, 00:23:05.248 { 00:23:05.248 "name": "BaseBdev3", 00:23:05.248 "uuid": "5a94bc75-4204-48be-9af9-6da60b033682", 00:23:05.248 "is_configured": true, 00:23:05.248 "data_offset": 2048, 00:23:05.248 "data_size": 63488 00:23:05.248 } 00:23:05.248 ] 00:23:05.248 } 00:23:05.248 } 00:23:05.248 }' 00:23:05.248 13:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:05.248 13:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:23:05.248 BaseBdev2 00:23:05.248 BaseBdev3' 00:23:05.248 13:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:05.248 13:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:05.248 13:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:05.249 13:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:05.249 13:43:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:23:05.249 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.249 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.249 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.249 13:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:05.249 13:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:05.249 13:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:05.249 13:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:05.249 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.249 13:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:05.249 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 
00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.507 [2024-11-20 13:43:08.259931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:05.507 [2024-11-20 13:43:08.259976] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:05.507 [2024-11-20 13:43:08.260086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:05.507 [2024-11-20 13:43:08.260163] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:05.507 [2024-11-20 13:43:08.260184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66440 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66440 ']' 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 66440 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66440 00:23:05.507 killing process with pid 66440 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66440' 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66440 00:23:05.507 [2024-11-20 13:43:08.298164] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:05.507 13:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66440 00:23:05.766 [2024-11-20 13:43:08.574766] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:07.142 13:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:23:07.142 00:23:07.142 real 0m11.780s 00:23:07.142 user 0m19.502s 00:23:07.142 sys 0m1.635s 00:23:07.142 13:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:07.142 13:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.142 ************************************ 00:23:07.142 END TEST raid_state_function_test_sb 00:23:07.142 ************************************ 00:23:07.142 13:43:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:23:07.142 13:43:09 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:07.142 13:43:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:07.142 13:43:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:07.142 ************************************ 00:23:07.142 START TEST raid_superblock_test 00:23:07.142 ************************************ 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:23:07.142 13:43:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67077 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67077 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67077 ']' 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.142 13:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.142 [2024-11-20 13:43:09.796649] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:23:07.142 [2024-11-20 13:43:09.796847] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67077 ]
00:23:07.142 [2024-11-20 13:43:09.973858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:07.400 [2024-11-20 13:43:10.103579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:07.400 [2024-11-20 13:43:10.305887] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:23:07.400 [2024-11-20 13:43:10.305977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:23:07.965 13:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:07.965 13:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:23:07.965 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:23:07.965 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:23:07.965 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:23:07.965 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:23:07.965 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:23:07.965 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:23:07.965 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:23:07.965 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:23:07.965 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:23:07.965 13:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:07.965 13:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:07.965 malloc1
00:23:07.965 13:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:07.966 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:23:07.966 13:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:07.966 13:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:07.966 [2024-11-20 13:43:10.879180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:23:07.966 [2024-11-20 13:43:10.879264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:07.966 [2024-11-20 13:43:10.879298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:23:07.966 [2024-11-20 13:43:10.879313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:08.225 [2024-11-20 13:43:10.882248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:08.225 [2024-11-20 13:43:10.882295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:23:08.225 pt1
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:08.225 malloc2
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:08.225 [2024-11-20 13:43:10.935330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:08.225 [2024-11-20 13:43:10.935409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:08.225 [2024-11-20 13:43:10.935448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:23:08.225 [2024-11-20 13:43:10.935464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:08.225 [2024-11-20 13:43:10.938364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:08.225 [2024-11-20 13:43:10.938410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:08.225 pt2
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:23:08.225 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:23:08.226 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:23:08.226 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:23:08.226 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:23:08.226 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:23:08.226 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:23:08.226 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:23:08.226 13:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:08.226 13:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:08.226 malloc3
00:23:08.226 13:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:08.226 13:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:23:08.226 13:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:08.226 13:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:08.226 [2024-11-20 13:43:10.999676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:23:08.226 [2024-11-20 13:43:10.999760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:08.226 [2024-11-20 13:43:10.999796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:23:08.226 [2024-11-20 13:43:10.999813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:08.226 [2024-11-20 13:43:11.002746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:08.226 [2024-11-20 13:43:11.002794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:23:08.226 pt3
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:08.226 [2024-11-20 13:43:11.011785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:23:08.226 [2024-11-20 13:43:11.014353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:08.226 [2024-11-20 13:43:11.014461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:23:08.226 [2024-11-20 13:43:11.014707] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:23:08.226 [2024-11-20 13:43:11.014750] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:23:08.226 [2024-11-20 13:43:11.015185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:23:08.226 [2024-11-20 13:43:11.015444] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:23:08.226 [2024-11-20 13:43:11.015465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:23:08.226 [2024-11-20 13:43:11.015760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:08.226 "name": "raid_bdev1",
00:23:08.226 "uuid": "fd296180-486e-4078-b08b-a262414c3bd0",
00:23:08.226 "strip_size_kb": 64,
00:23:08.226 "state": "online",
00:23:08.226 "raid_level": "concat",
00:23:08.226 "superblock": true,
00:23:08.226 "num_base_bdevs": 3,
00:23:08.226 "num_base_bdevs_discovered": 3,
00:23:08.226 "num_base_bdevs_operational": 3,
00:23:08.226 "base_bdevs_list": [
00:23:08.226 {
00:23:08.226 "name": "pt1",
00:23:08.226 "uuid": "00000000-0000-0000-0000-000000000001",
00:23:08.226 "is_configured": true,
00:23:08.226 "data_offset": 2048,
00:23:08.226 "data_size": 63488
00:23:08.226 },
00:23:08.226 {
00:23:08.226 "name": "pt2",
00:23:08.226 "uuid": "00000000-0000-0000-0000-000000000002",
00:23:08.226 "is_configured": true,
00:23:08.226 "data_offset": 2048,
00:23:08.226 "data_size": 63488
00:23:08.226 },
00:23:08.226 {
00:23:08.226 "name": "pt3",
00:23:08.226 "uuid": "00000000-0000-0000-0000-000000000003",
00:23:08.226 "is_configured": true,
00:23:08.226 "data_offset": 2048,
00:23:08.226 "data_size": 63488
00:23:08.226 }
00:23:08.226 ]
00:23:08.226 }'
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:08.226 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:08.809 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:23:08.809 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:23:08.809 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:23:08.809 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:23:08.809 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:23:08.809 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:23:08.809 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:23:08.809 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:08.809 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:23:08.809 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:08.809 [2024-11-20 13:43:11.504326] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:08.810 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:08.810 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:23:08.810 "name": "raid_bdev1",
00:23:08.810 "aliases": [
00:23:08.810 "fd296180-486e-4078-b08b-a262414c3bd0"
00:23:08.810 ],
00:23:08.810 "product_name": "Raid Volume",
00:23:08.810 "block_size": 512,
00:23:08.810 "num_blocks": 190464,
00:23:08.810 "uuid": "fd296180-486e-4078-b08b-a262414c3bd0",
00:23:08.810 "assigned_rate_limits": {
00:23:08.810 "rw_ios_per_sec": 0,
00:23:08.810 "rw_mbytes_per_sec": 0,
00:23:08.810 "r_mbytes_per_sec": 0,
00:23:08.810 "w_mbytes_per_sec": 0
00:23:08.810 },
00:23:08.810 "claimed": false,
00:23:08.810 "zoned": false,
00:23:08.810 "supported_io_types": {
00:23:08.810 "read": true,
00:23:08.810 "write": true,
00:23:08.810 "unmap": true,
00:23:08.810 "flush": true,
00:23:08.810 "reset": true,
00:23:08.810 "nvme_admin": false,
00:23:08.810 "nvme_io": false,
00:23:08.810 "nvme_io_md": false,
00:23:08.810 "write_zeroes": true,
00:23:08.810 "zcopy": false,
00:23:08.810 "get_zone_info": false,
00:23:08.810 "zone_management": false,
00:23:08.810 "zone_append": false,
00:23:08.810 "compare": false,
00:23:08.810 "compare_and_write": false,
00:23:08.810 "abort": false,
00:23:08.810 "seek_hole": false,
00:23:08.810 "seek_data": false,
00:23:08.810 "copy": false,
00:23:08.810 "nvme_iov_md": false
00:23:08.810 },
00:23:08.810 "memory_domains": [
00:23:08.810 {
00:23:08.810 "dma_device_id": "system",
00:23:08.810 "dma_device_type": 1
00:23:08.810 },
00:23:08.810 {
00:23:08.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:08.810 "dma_device_type": 2
00:23:08.810 },
00:23:08.810 {
00:23:08.810 "dma_device_id": "system",
00:23:08.810 "dma_device_type": 1
00:23:08.810 },
00:23:08.810 {
00:23:08.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:08.810 "dma_device_type": 2
00:23:08.810 },
00:23:08.810 {
00:23:08.810 "dma_device_id": "system",
00:23:08.810 "dma_device_type": 1
00:23:08.810 },
00:23:08.810 {
00:23:08.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:08.810 "dma_device_type": 2
00:23:08.810 }
00:23:08.810 ],
00:23:08.810 "driver_specific": {
00:23:08.810 "raid": {
00:23:08.810 "uuid": "fd296180-486e-4078-b08b-a262414c3bd0",
00:23:08.810 "strip_size_kb": 64,
00:23:08.810 "state": "online",
00:23:08.810 "raid_level": "concat",
00:23:08.810 "superblock": true,
00:23:08.810 "num_base_bdevs": 3,
00:23:08.810 "num_base_bdevs_discovered": 3,
00:23:08.810 "num_base_bdevs_operational": 3,
00:23:08.810 "base_bdevs_list": [
00:23:08.810 {
00:23:08.810 "name": "pt1",
00:23:08.810 "uuid": "00000000-0000-0000-0000-000000000001",
00:23:08.810 "is_configured": true,
00:23:08.810 "data_offset": 2048,
00:23:08.810 "data_size": 63488
00:23:08.810 },
00:23:08.810 {
00:23:08.810 "name": "pt2",
00:23:08.810 "uuid": "00000000-0000-0000-0000-000000000002",
00:23:08.810 "is_configured": true,
00:23:08.810 "data_offset": 2048,
00:23:08.810 "data_size": 63488
00:23:08.810 },
00:23:08.810 {
00:23:08.810 "name": "pt3",
00:23:08.810 "uuid": "00000000-0000-0000-0000-000000000003",
00:23:08.810 "is_configured": true,
00:23:08.810 "data_offset": 2048,
00:23:08.810 "data_size": 63488
00:23:08.810 }
00:23:08.810 ]
00:23:08.810 }
00:23:08.810 }
00:23:08.810 }'
00:23:08.810 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:23:08.810 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:23:08.810 pt2
00:23:08.810 pt3'
00:23:08.810 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:08.810 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:23:08.810 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:08.810 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:23:08.810 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:08.810 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:08.810 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:08.810 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:08.810 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:23:08.810 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:23:08.810 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:08.810 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:23:08.810 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:08.810 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:08.810 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:09.068 [2024-11-20 13:43:11.840381] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fd296180-486e-4078-b08b-a262414c3bd0
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fd296180-486e-4078-b08b-a262414c3bd0 ']'
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:09.068 [2024-11-20 13:43:11.884043] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:09.068 [2024-11-20 13:43:11.884086] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:09.068 [2024-11-20 13:43:11.884197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:09.068 [2024-11-20 13:43:11.884308] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:23:09.068 [2024-11-20 13:43:11.884328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.068 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.069 13:43:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:09.326 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.326 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:23:09.326 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:23:09.326 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:23:09.326 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:23:09.326 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:23:09.326 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:09.326 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:23:09.326 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:09.326 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:23:09.326 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.326 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:09.326 [2024-11-20 13:43:12.060160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:23:09.326 [2024-11-20 13:43:12.062629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:23:09.326 [2024-11-20 13:43:12.062711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:23:09.326 [2024-11-20 13:43:12.062791] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:23:09.326 [2024-11-20 13:43:12.062913] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:23:09.326 [2024-11-20 13:43:12.062960] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:23:09.326 [2024-11-20 13:43:12.062991] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:09.326 [2024-11-20 13:43:12.063005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:23:09.326 request:
00:23:09.326 {
00:23:09.326 "name": "raid_bdev1",
00:23:09.326 "raid_level": "concat",
00:23:09.326 "base_bdevs": [
00:23:09.326 "malloc1",
00:23:09.326 "malloc2",
00:23:09.326 "malloc3"
00:23:09.326 ],
00:23:09.326 "strip_size_kb": 64,
00:23:09.326 "superblock": false,
00:23:09.326 "method": "bdev_raid_create",
00:23:09.326 "req_id": 1
00:23:09.326 }
00:23:09.326 Got JSON-RPC error response
00:23:09.326 response:
00:23:09.326 {
00:23:09.326 "code": -17,
00:23:09.326 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:23:09.326 }
00:23:09.326 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:23:09.326 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:23:09.326 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:09.326 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:09.326 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:09.326 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:09.327 [2024-11-20 13:43:12.120092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:23:09.327 [2024-11-20 13:43:12.120172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:09.327 [2024-11-20 13:43:12.120210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:23:09.327 [2024-11-20 13:43:12.120225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:09.327 [2024-11-20 13:43:12.123117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:09.327 [2024-11-20 13:43:12.123164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:23:09.327 [2024-11-20 13:43:12.123288] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:23:09.327 [2024-11-20 13:43:12.123382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:23:09.327 pt1
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:09.327 "name": "raid_bdev1",
00:23:09.327 "uuid": "fd296180-486e-4078-b08b-a262414c3bd0",
00:23:09.327 "strip_size_kb": 64,
00:23:09.327 "state": "configuring",
00:23:09.327 "raid_level": "concat",
00:23:09.327 "superblock": true,
00:23:09.327 "num_base_bdevs": 3,
00:23:09.327 "num_base_bdevs_discovered": 1,
00:23:09.327 "num_base_bdevs_operational": 3,
00:23:09.327 "base_bdevs_list": [
00:23:09.327 {
00:23:09.327 "name": "pt1",
00:23:09.327 "uuid": "00000000-0000-0000-0000-000000000001",
00:23:09.327 "is_configured": true,
00:23:09.327 "data_offset": 2048,
00:23:09.327 "data_size": 63488
00:23:09.327 },
00:23:09.327 {
00:23:09.327 "name": null,
00:23:09.327 "uuid": "00000000-0000-0000-0000-000000000002",
00:23:09.327 "is_configured": false,
00:23:09.327 "data_offset": 2048,
00:23:09.327 "data_size": 63488
00:23:09.327 },
00:23:09.327 {
00:23:09.327 "name": null,
00:23:09.327 "uuid": "00000000-0000-0000-0000-000000000003",
00:23:09.327 "is_configured": false,
00:23:09.327 "data_offset": 2048,
00:23:09.327 "data_size": 63488
00:23:09.327 }
00:23:09.327 ]
00:23:09.327 }'
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:09.327 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:09.892 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:23:09.892 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:09.893 [2024-11-20 13:43:12.672241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:09.893 [2024-11-20 13:43:12.672331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:09.893 [2024-11-20 13:43:12.672371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:23:09.893 [2024-11-20 13:43:12.672387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:09.893 [2024-11-20 13:43:12.672990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:09.893 [2024-11-20 13:43:12.673025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:09.893 [2024-11-20 13:43:12.673158] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:23:09.893 [2024-11-20 13:43:12.673210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:09.893 pt2
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:09.893 [2024-11-20 13:43:12.680225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:09.893 "name": "raid_bdev1",
00:23:09.893 "uuid": "fd296180-486e-4078-b08b-a262414c3bd0",
00:23:09.893 "strip_size_kb": 64,
00:23:09.893 "state": "configuring",
00:23:09.893 "raid_level": "concat",
00:23:09.893 "superblock": true,
00:23:09.893 "num_base_bdevs": 3,
00:23:09.893 "num_base_bdevs_discovered": 1,
00:23:09.893 "num_base_bdevs_operational": 3,
00:23:09.893 "base_bdevs_list": [
00:23:09.893 {
00:23:09.893 "name": "pt1",
00:23:09.893 "uuid": "00000000-0000-0000-0000-000000000001",
00:23:09.893 "is_configured": true,
00:23:09.893 "data_offset": 2048,
00:23:09.893 "data_size": 63488
00:23:09.893 },
00:23:09.893 {
00:23:09.893 "name": null,
00:23:09.893 "uuid": "00000000-0000-0000-0000-000000000002",
00:23:09.893 "is_configured": false,
00:23:09.893 "data_offset": 0,
00:23:09.893 "data_size": 63488
00:23:09.893 },
00:23:09.893 {
00:23:09.893 "name": null,
00:23:09.893 "uuid": "00000000-0000-0000-0000-000000000003",
00:23:09.893 "is_configured": false,
00:23:09.893 "data_offset": 2048,
00:23:09.893 "data_size": 63488
00:23:09.893 }
00:23:09.893 ]
00:23:09.893 }'
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:09.893 13:43:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:10.460 [2024-11-20 13:43:13.188339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:10.460 [2024-11-20 13:43:13.188425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:10.460 [2024-11-20 13:43:13.188453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:23:10.460 [2024-11-20 13:43:13.188471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:10.460 [2024-11-20 13:43:13.189090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:10.460 [2024-11-20 13:43:13.189133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:10.460 [2024-11-20 13:43:13.189249] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:23:10.460 [2024-11-20 13:43:13.189298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:10.460 pt2
00:23:10.460 13:43:13 bdev_raid.raid_superblock_test
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.460 [2024-11-20 13:43:13.196334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:10.460 [2024-11-20 13:43:13.196394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.460 [2024-11-20 13:43:13.196416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:10.460 [2024-11-20 13:43:13.196431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.460 [2024-11-20 13:43:13.196946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.460 [2024-11-20 13:43:13.196995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:10.460 [2024-11-20 13:43:13.197089] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:10.460 [2024-11-20 13:43:13.197134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:10.460 [2024-11-20 13:43:13.197320] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:10.460 [2024-11-20 13:43:13.197348] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:10.460 [2024-11-20 13:43:13.197698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:10.460 [2024-11-20 
13:43:13.197943] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:10.460 [2024-11-20 13:43:13.197968] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:10.460 [2024-11-20 13:43:13.198169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:10.460 pt3 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:10.460 "name": "raid_bdev1", 00:23:10.460 "uuid": "fd296180-486e-4078-b08b-a262414c3bd0", 00:23:10.460 "strip_size_kb": 64, 00:23:10.460 "state": "online", 00:23:10.460 "raid_level": "concat", 00:23:10.460 "superblock": true, 00:23:10.460 "num_base_bdevs": 3, 00:23:10.460 "num_base_bdevs_discovered": 3, 00:23:10.460 "num_base_bdevs_operational": 3, 00:23:10.460 "base_bdevs_list": [ 00:23:10.460 { 00:23:10.460 "name": "pt1", 00:23:10.460 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:10.460 "is_configured": true, 00:23:10.460 "data_offset": 2048, 00:23:10.460 "data_size": 63488 00:23:10.460 }, 00:23:10.460 { 00:23:10.460 "name": "pt2", 00:23:10.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:10.460 "is_configured": true, 00:23:10.460 "data_offset": 2048, 00:23:10.460 "data_size": 63488 00:23:10.460 }, 00:23:10.460 { 00:23:10.460 "name": "pt3", 00:23:10.460 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:10.460 "is_configured": true, 00:23:10.460 "data_offset": 2048, 00:23:10.460 "data_size": 63488 00:23:10.460 } 00:23:10.460 ] 00:23:10.460 }' 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:10.460 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:11.027 13:43:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.027 [2024-11-20 13:43:13.688870] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:11.027 "name": "raid_bdev1", 00:23:11.027 "aliases": [ 00:23:11.027 "fd296180-486e-4078-b08b-a262414c3bd0" 00:23:11.027 ], 00:23:11.027 "product_name": "Raid Volume", 00:23:11.027 "block_size": 512, 00:23:11.027 "num_blocks": 190464, 00:23:11.027 "uuid": "fd296180-486e-4078-b08b-a262414c3bd0", 00:23:11.027 "assigned_rate_limits": { 00:23:11.027 "rw_ios_per_sec": 0, 00:23:11.027 "rw_mbytes_per_sec": 0, 00:23:11.027 "r_mbytes_per_sec": 0, 00:23:11.027 "w_mbytes_per_sec": 0 00:23:11.027 }, 00:23:11.027 "claimed": false, 00:23:11.027 "zoned": false, 00:23:11.027 "supported_io_types": { 00:23:11.027 "read": true, 00:23:11.027 "write": true, 00:23:11.027 "unmap": true, 00:23:11.027 "flush": true, 00:23:11.027 "reset": true, 00:23:11.027 "nvme_admin": false, 00:23:11.027 "nvme_io": false, 00:23:11.027 "nvme_io_md": false, 00:23:11.027 
"write_zeroes": true, 00:23:11.027 "zcopy": false, 00:23:11.027 "get_zone_info": false, 00:23:11.027 "zone_management": false, 00:23:11.027 "zone_append": false, 00:23:11.027 "compare": false, 00:23:11.027 "compare_and_write": false, 00:23:11.027 "abort": false, 00:23:11.027 "seek_hole": false, 00:23:11.027 "seek_data": false, 00:23:11.027 "copy": false, 00:23:11.027 "nvme_iov_md": false 00:23:11.027 }, 00:23:11.027 "memory_domains": [ 00:23:11.027 { 00:23:11.027 "dma_device_id": "system", 00:23:11.027 "dma_device_type": 1 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.027 "dma_device_type": 2 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "dma_device_id": "system", 00:23:11.027 "dma_device_type": 1 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.027 "dma_device_type": 2 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "dma_device_id": "system", 00:23:11.027 "dma_device_type": 1 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.027 "dma_device_type": 2 00:23:11.027 } 00:23:11.027 ], 00:23:11.027 "driver_specific": { 00:23:11.027 "raid": { 00:23:11.027 "uuid": "fd296180-486e-4078-b08b-a262414c3bd0", 00:23:11.027 "strip_size_kb": 64, 00:23:11.027 "state": "online", 00:23:11.027 "raid_level": "concat", 00:23:11.027 "superblock": true, 00:23:11.027 "num_base_bdevs": 3, 00:23:11.027 "num_base_bdevs_discovered": 3, 00:23:11.027 "num_base_bdevs_operational": 3, 00:23:11.027 "base_bdevs_list": [ 00:23:11.027 { 00:23:11.027 "name": "pt1", 00:23:11.027 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:11.027 "is_configured": true, 00:23:11.027 "data_offset": 2048, 00:23:11.027 "data_size": 63488 00:23:11.027 }, 00:23:11.027 { 00:23:11.027 "name": "pt2", 00:23:11.027 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:11.027 "is_configured": true, 00:23:11.027 "data_offset": 2048, 00:23:11.027 "data_size": 63488 00:23:11.027 }, 00:23:11.027 
{ 00:23:11.027 "name": "pt3", 00:23:11.027 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:11.027 "is_configured": true, 00:23:11.027 "data_offset": 2048, 00:23:11.027 "data_size": 63488 00:23:11.027 } 00:23:11.027 ] 00:23:11.027 } 00:23:11.027 } 00:23:11.027 }' 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:11.027 pt2 00:23:11.027 pt3' 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:11.027 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:11.028 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:11.028 13:43:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.028 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.028 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:11.028 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.028 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:11.028 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:11.028 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:11.028 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:23:11.028 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:11.028 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.028 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.286 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.286 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:11.286 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:11.286 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:11.286 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.286 13:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.286 13:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:11.286 
[2024-11-20 13:43:13.984954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:11.286 13:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.286 13:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fd296180-486e-4078-b08b-a262414c3bd0 '!=' fd296180-486e-4078-b08b-a262414c3bd0 ']' 00:23:11.286 13:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:23:11.286 13:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:11.286 13:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:23:11.286 13:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67077 00:23:11.286 13:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67077 ']' 00:23:11.286 13:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67077 00:23:11.286 13:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:23:11.286 13:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.286 13:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67077 00:23:11.286 13:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:11.286 13:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:11.286 killing process with pid 67077 00:23:11.286 13:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67077' 00:23:11.286 13:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67077 00:23:11.286 [2024-11-20 13:43:14.066471] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:11.286 13:43:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 67077 00:23:11.286 [2024-11-20 13:43:14.066615] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:11.286 [2024-11-20 13:43:14.066716] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:11.286 [2024-11-20 13:43:14.066740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:11.545 [2024-11-20 13:43:14.340290] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:12.480 13:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:23:12.480 00:23:12.480 real 0m5.686s 00:23:12.480 user 0m8.545s 00:23:12.480 sys 0m0.861s 00:23:12.480 13:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:12.480 13:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.480 ************************************ 00:23:12.480 END TEST raid_superblock_test 00:23:12.480 ************************************ 00:23:12.738 13:43:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:23:12.738 13:43:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:12.738 13:43:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:12.738 13:43:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:12.738 ************************************ 00:23:12.738 START TEST raid_read_error_test 00:23:12.738 ************************************ 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:23:12.738 13:43:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CPjENj7p4O 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67336 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67336 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67336 ']' 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.738 13:43:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.738 [2024-11-20 13:43:15.531868] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:23:12.738 [2024-11-20 13:43:15.532044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67336 ] 00:23:12.997 [2024-11-20 13:43:15.704874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.997 [2024-11-20 13:43:15.834969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.255 [2024-11-20 13:43:16.037044] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:13.255 [2024-11-20 13:43:16.037094] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:13.822 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.822 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:23:13.822 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:13.822 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.823 BaseBdev1_malloc 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.823 true 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.823 [2024-11-20 13:43:16.666929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:13.823 [2024-11-20 13:43:16.666997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:13.823 [2024-11-20 13:43:16.667029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:13.823 [2024-11-20 13:43:16.667047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:13.823 [2024-11-20 13:43:16.669876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:13.823 [2024-11-20 13:43:16.669940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:13.823 BaseBdev1 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.823 BaseBdev2_malloc 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.823 true 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:13.823 [2024-11-20 13:43:16.731428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:13.823 [2024-11-20 13:43:16.731502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:13.823 [2024-11-20 13:43:16.731528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:13.823 [2024-11-20 13:43:16.731547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:13.823 [2024-11-20 13:43:16.734455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:13.823 [2024-11-20 13:43:16.734504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:13.823 BaseBdev2 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.823 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.081 BaseBdev3_malloc 00:23:14.081 13:43:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.081 true 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.081 [2024-11-20 13:43:16.804752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:14.081 [2024-11-20 13:43:16.804823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:14.081 [2024-11-20 13:43:16.804854] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:14.081 [2024-11-20 13:43:16.804873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.081 [2024-11-20 13:43:16.807853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.081 [2024-11-20 13:43:16.807919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:14.081 BaseBdev3 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.081 [2024-11-20 13:43:16.812871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:14.081 [2024-11-20 13:43:16.815369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:14.081 [2024-11-20 13:43:16.815487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:14.081 [2024-11-20 13:43:16.815801] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:14.081 [2024-11-20 13:43:16.815832] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:14.081 [2024-11-20 13:43:16.816220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:23:14.081 [2024-11-20 13:43:16.816484] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:14.081 [2024-11-20 13:43:16.816525] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:14.081 [2024-11-20 13:43:16.816822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:14.081 13:43:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.081 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:14.081 "name": "raid_bdev1", 00:23:14.081 "uuid": "37e2b72f-3ea1-49f0-93e5-9292e14b5fa4", 00:23:14.081 "strip_size_kb": 64, 00:23:14.081 "state": "online", 00:23:14.081 "raid_level": "concat", 00:23:14.081 "superblock": true, 00:23:14.081 "num_base_bdevs": 3, 00:23:14.081 "num_base_bdevs_discovered": 3, 00:23:14.081 "num_base_bdevs_operational": 3, 00:23:14.081 "base_bdevs_list": [ 00:23:14.081 { 00:23:14.081 "name": "BaseBdev1", 00:23:14.081 "uuid": "43237ac5-efa8-515e-9048-a8d4f801784a", 00:23:14.081 "is_configured": true, 00:23:14.081 "data_offset": 2048, 00:23:14.081 "data_size": 63488 00:23:14.081 }, 00:23:14.081 { 00:23:14.081 "name": "BaseBdev2", 00:23:14.081 "uuid": "6963cf0b-9c89-597d-9c1c-b73b0ca42854", 00:23:14.081 "is_configured": true, 00:23:14.081 "data_offset": 2048, 00:23:14.082 "data_size": 63488 
00:23:14.082 }, 00:23:14.082 { 00:23:14.082 "name": "BaseBdev3", 00:23:14.082 "uuid": "8af32622-80f1-5997-9e5b-37292cebc981", 00:23:14.082 "is_configured": true, 00:23:14.082 "data_offset": 2048, 00:23:14.082 "data_size": 63488 00:23:14.082 } 00:23:14.082 ] 00:23:14.082 }' 00:23:14.082 13:43:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:14.082 13:43:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.648 13:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:23:14.648 13:43:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:14.648 [2024-11-20 13:43:17.506485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:15.583 "name": "raid_bdev1", 00:23:15.583 "uuid": "37e2b72f-3ea1-49f0-93e5-9292e14b5fa4", 00:23:15.583 "strip_size_kb": 64, 00:23:15.583 "state": "online", 00:23:15.583 "raid_level": "concat", 00:23:15.583 "superblock": true, 00:23:15.583 "num_base_bdevs": 3, 00:23:15.583 "num_base_bdevs_discovered": 3, 00:23:15.583 "num_base_bdevs_operational": 3, 00:23:15.583 "base_bdevs_list": [ 00:23:15.583 { 00:23:15.583 "name": "BaseBdev1", 00:23:15.583 "uuid": "43237ac5-efa8-515e-9048-a8d4f801784a", 00:23:15.583 "is_configured": true, 00:23:15.583 "data_offset": 2048, 00:23:15.583 "data_size": 63488 
00:23:15.583 }, 00:23:15.583 { 00:23:15.583 "name": "BaseBdev2", 00:23:15.583 "uuid": "6963cf0b-9c89-597d-9c1c-b73b0ca42854", 00:23:15.583 "is_configured": true, 00:23:15.583 "data_offset": 2048, 00:23:15.583 "data_size": 63488 00:23:15.583 }, 00:23:15.583 { 00:23:15.583 "name": "BaseBdev3", 00:23:15.583 "uuid": "8af32622-80f1-5997-9e5b-37292cebc981", 00:23:15.583 "is_configured": true, 00:23:15.583 "data_offset": 2048, 00:23:15.583 "data_size": 63488 00:23:15.583 } 00:23:15.583 ] 00:23:15.583 }' 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:15.583 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.150 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:16.150 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.150 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.150 [2024-11-20 13:43:18.893398] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:16.150 [2024-11-20 13:43:18.893440] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:16.150 [2024-11-20 13:43:18.896871] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:16.150 [2024-11-20 13:43:18.896954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:16.150 [2024-11-20 13:43:18.897012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:16.150 [2024-11-20 13:43:18.897027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:16.150 { 00:23:16.150 "results": [ 00:23:16.150 { 00:23:16.150 "job": "raid_bdev1", 00:23:16.150 "core_mask": "0x1", 00:23:16.150 "workload": "randrw", 00:23:16.150 "percentage": 50, 
00:23:16.150 "status": "finished", 00:23:16.150 "queue_depth": 1, 00:23:16.150 "io_size": 131072, 00:23:16.150 "runtime": 1.384564, 00:23:16.150 "iops": 10132.431581349796, 00:23:16.150 "mibps": 1266.5539476687245, 00:23:16.150 "io_failed": 1, 00:23:16.150 "io_timeout": 0, 00:23:16.150 "avg_latency_us": 137.36481772824467, 00:23:16.150 "min_latency_us": 43.75272727272727, 00:23:16.150 "max_latency_us": 1854.370909090909 00:23:16.150 } 00:23:16.150 ], 00:23:16.150 "core_count": 1 00:23:16.150 } 00:23:16.150 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.150 13:43:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67336 00:23:16.150 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67336 ']' 00:23:16.150 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67336 00:23:16.150 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:23:16.150 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.150 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67336 00:23:16.150 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:16.150 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:16.150 killing process with pid 67336 00:23:16.150 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67336' 00:23:16.150 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67336 00:23:16.150 13:43:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67336 00:23:16.150 [2024-11-20 13:43:18.930556] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:16.408 [2024-11-20 
13:43:19.141527] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:17.816 13:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CPjENj7p4O 00:23:17.816 13:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:23:17.816 13:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:23:17.816 13:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:23:17.816 13:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:23:17.816 13:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:17.817 13:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:23:17.817 13:43:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:23:17.817 00:23:17.817 real 0m4.913s 00:23:17.817 user 0m6.107s 00:23:17.817 sys 0m0.614s 00:23:17.817 13:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:17.817 13:43:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.817 ************************************ 00:23:17.817 END TEST raid_read_error_test 00:23:17.817 ************************************ 00:23:17.817 13:43:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:23:17.817 13:43:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:17.817 13:43:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.817 13:43:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:17.817 ************************************ 00:23:17.817 START TEST raid_write_error_test 00:23:17.817 ************************************ 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:23:17.817 13:43:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:23:17.817 13:43:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UExNmS3cRU 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67476 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67476 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67476 ']' 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.817 13:43:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.817 [2024-11-20 13:43:20.524236] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:23:17.817 [2024-11-20 13:43:20.524437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67476 ] 00:23:17.817 [2024-11-20 13:43:20.710402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.075 [2024-11-20 13:43:20.843795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.334 [2024-11-20 13:43:21.055715] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:18.334 [2024-11-20 13:43:21.055798] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.903 BaseBdev1_malloc 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.903 true 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.903 [2024-11-20 13:43:21.578093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:18.903 [2024-11-20 13:43:21.578177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.903 [2024-11-20 13:43:21.578212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:18.903 [2024-11-20 13:43:21.578231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.903 [2024-11-20 13:43:21.581350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.903 [2024-11-20 13:43:21.581407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:18.903 BaseBdev1 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:18.903 BaseBdev2_malloc 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.903 true 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.903 [2024-11-20 13:43:21.634933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:18.903 [2024-11-20 13:43:21.635001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.903 [2024-11-20 13:43:21.635027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:18.903 [2024-11-20 13:43:21.635045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.903 [2024-11-20 13:43:21.637872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.903 [2024-11-20 13:43:21.637938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:18.903 BaseBdev2 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:18.903 13:43:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.903 BaseBdev3_malloc 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.903 true 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.903 [2024-11-20 13:43:21.713769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:18.903 [2024-11-20 13:43:21.713840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.903 [2024-11-20 13:43:21.713869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:18.903 [2024-11-20 13:43:21.713887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.903 [2024-11-20 13:43:21.716884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.903 [2024-11-20 13:43:21.716957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:23:18.903 BaseBdev3 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.903 [2024-11-20 13:43:21.721974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:18.903 [2024-11-20 13:43:21.724559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:18.903 [2024-11-20 13:43:21.724680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:18.903 [2024-11-20 13:43:21.724991] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:18.903 [2024-11-20 13:43:21.725021] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:18.903 [2024-11-20 13:43:21.725373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:23:18.903 [2024-11-20 13:43:21.725630] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:18.903 [2024-11-20 13:43:21.725664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:18.903 [2024-11-20 13:43:21.725928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.903 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:18.903 "name": "raid_bdev1", 00:23:18.903 "uuid": "4ca3ec54-1d39-451c-8360-559b39a700f3", 00:23:18.903 "strip_size_kb": 64, 00:23:18.903 "state": "online", 00:23:18.903 "raid_level": "concat", 00:23:18.903 "superblock": true, 00:23:18.903 "num_base_bdevs": 3, 00:23:18.903 "num_base_bdevs_discovered": 3, 00:23:18.903 "num_base_bdevs_operational": 3, 00:23:18.903 "base_bdevs_list": [ 00:23:18.904 { 00:23:18.904 
"name": "BaseBdev1", 00:23:18.904 "uuid": "26101310-fa1e-5230-8f0a-cff5bdb044bc", 00:23:18.904 "is_configured": true, 00:23:18.904 "data_offset": 2048, 00:23:18.904 "data_size": 63488 00:23:18.904 }, 00:23:18.904 { 00:23:18.904 "name": "BaseBdev2", 00:23:18.904 "uuid": "f7a84572-9805-5208-9b9c-46cf34d6618d", 00:23:18.904 "is_configured": true, 00:23:18.904 "data_offset": 2048, 00:23:18.904 "data_size": 63488 00:23:18.904 }, 00:23:18.904 { 00:23:18.904 "name": "BaseBdev3", 00:23:18.904 "uuid": "dd3eba9a-308c-5a27-9c06-a4913965a365", 00:23:18.904 "is_configured": true, 00:23:18.904 "data_offset": 2048, 00:23:18.904 "data_size": 63488 00:23:18.904 } 00:23:18.904 ] 00:23:18.904 }' 00:23:18.904 13:43:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:18.904 13:43:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.524 13:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:23:19.524 13:43:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:19.524 [2024-11-20 13:43:22.379837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:20.460 "name": "raid_bdev1", 00:23:20.460 "uuid": "4ca3ec54-1d39-451c-8360-559b39a700f3", 00:23:20.460 "strip_size_kb": 64, 00:23:20.460 "state": "online", 
00:23:20.460 "raid_level": "concat", 00:23:20.460 "superblock": true, 00:23:20.460 "num_base_bdevs": 3, 00:23:20.460 "num_base_bdevs_discovered": 3, 00:23:20.460 "num_base_bdevs_operational": 3, 00:23:20.460 "base_bdevs_list": [ 00:23:20.460 { 00:23:20.460 "name": "BaseBdev1", 00:23:20.460 "uuid": "26101310-fa1e-5230-8f0a-cff5bdb044bc", 00:23:20.460 "is_configured": true, 00:23:20.460 "data_offset": 2048, 00:23:20.460 "data_size": 63488 00:23:20.460 }, 00:23:20.460 { 00:23:20.460 "name": "BaseBdev2", 00:23:20.460 "uuid": "f7a84572-9805-5208-9b9c-46cf34d6618d", 00:23:20.460 "is_configured": true, 00:23:20.460 "data_offset": 2048, 00:23:20.460 "data_size": 63488 00:23:20.460 }, 00:23:20.460 { 00:23:20.460 "name": "BaseBdev3", 00:23:20.460 "uuid": "dd3eba9a-308c-5a27-9c06-a4913965a365", 00:23:20.460 "is_configured": true, 00:23:20.460 "data_offset": 2048, 00:23:20.460 "data_size": 63488 00:23:20.460 } 00:23:20.460 ] 00:23:20.460 }' 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:20.460 13:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.028 13:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:21.028 13:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.028 13:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.028 [2024-11-20 13:43:23.774285] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:21.028 [2024-11-20 13:43:23.774358] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:21.028 [2024-11-20 13:43:23.777914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:21.028 [2024-11-20 13:43:23.777982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:21.028 [2024-11-20 13:43:23.778045] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:21.028 [2024-11-20 13:43:23.778065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:21.028 { 00:23:21.028 "results": [ 00:23:21.028 { 00:23:21.028 "job": "raid_bdev1", 00:23:21.028 "core_mask": "0x1", 00:23:21.028 "workload": "randrw", 00:23:21.028 "percentage": 50, 00:23:21.028 "status": "finished", 00:23:21.028 "queue_depth": 1, 00:23:21.028 "io_size": 131072, 00:23:21.028 "runtime": 1.391846, 00:23:21.028 "iops": 8846.524687357653, 00:23:21.028 "mibps": 1105.8155859197066, 00:23:21.028 "io_failed": 1, 00:23:21.028 "io_timeout": 0, 00:23:21.028 "avg_latency_us": 158.40425709096814, 00:23:21.028 "min_latency_us": 39.79636363636364, 00:23:21.028 "max_latency_us": 1966.08 00:23:21.028 } 00:23:21.028 ], 00:23:21.028 "core_count": 1 00:23:21.028 } 00:23:21.028 13:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.028 13:43:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67476 00:23:21.028 13:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67476 ']' 00:23:21.028 13:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67476 00:23:21.028 13:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:23:21.028 13:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.028 13:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67476 00:23:21.028 13:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:21.028 13:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:21.028 killing process with pid 67476 00:23:21.028 13:43:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67476' 00:23:21.028 13:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67476 00:23:21.028 [2024-11-20 13:43:23.813342] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:21.028 13:43:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67476 00:23:21.287 [2024-11-20 13:43:24.020252] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:22.223 13:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UExNmS3cRU 00:23:22.223 13:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:23:22.223 13:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:23:22.223 13:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:23:22.223 13:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:23:22.223 13:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:22.223 13:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:23:22.223 13:43:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:23:22.223 00:23:22.223 real 0m4.736s 00:23:22.223 user 0m5.844s 00:23:22.223 sys 0m0.621s 00:23:22.223 13:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:22.223 13:43:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.223 ************************************ 00:23:22.223 END TEST raid_write_error_test 00:23:22.223 ************************************ 00:23:22.482 13:43:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:23:22.482 13:43:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:23:22.482 13:43:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:22.482 13:43:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:22.482 13:43:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:22.482 ************************************ 00:23:22.482 START TEST raid_state_function_test 00:23:22.482 ************************************ 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67625 00:23:22.482 Process raid pid: 67625 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67625' 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67625 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67625 ']' 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.482 13:43:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.482 [2024-11-20 13:43:25.296796] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:23:22.482 [2024-11-20 13:43:25.297008] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.741 [2024-11-20 13:43:25.487051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.741 [2024-11-20 13:43:25.633450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.999 [2024-11-20 13:43:25.840812] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:22.999 [2024-11-20 13:43:25.840876] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.566 [2024-11-20 13:43:26.201533] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:23.566 [2024-11-20 13:43:26.201611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:23.566 [2024-11-20 13:43:26.201628] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:23.566 [2024-11-20 13:43:26.201644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:23.566 [2024-11-20 13:43:26.201654] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:23.566 [2024-11-20 13:43:26.201668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:23.566 
13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.566 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:23.566 "name": "Existed_Raid", 00:23:23.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.566 "strip_size_kb": 0, 00:23:23.566 "state": "configuring", 00:23:23.566 "raid_level": "raid1", 00:23:23.566 "superblock": false, 00:23:23.566 "num_base_bdevs": 3, 00:23:23.566 "num_base_bdevs_discovered": 0, 00:23:23.566 "num_base_bdevs_operational": 3, 00:23:23.566 "base_bdevs_list": [ 00:23:23.566 { 00:23:23.566 "name": "BaseBdev1", 00:23:23.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.566 "is_configured": false, 00:23:23.566 "data_offset": 0, 00:23:23.567 "data_size": 0 00:23:23.567 }, 00:23:23.567 { 00:23:23.567 "name": "BaseBdev2", 00:23:23.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.567 "is_configured": false, 00:23:23.567 "data_offset": 0, 00:23:23.567 "data_size": 0 00:23:23.567 }, 00:23:23.567 { 00:23:23.567 "name": "BaseBdev3", 00:23:23.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.567 "is_configured": false, 00:23:23.567 "data_offset": 0, 00:23:23.567 "data_size": 0 00:23:23.567 } 00:23:23.567 ] 00:23:23.567 }' 00:23:23.567 13:43:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:23.567 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.825 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:23.825 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.825 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.826 [2024-11-20 13:43:26.737730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:23.826 [2024-11-20 13:43:26.737776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.085 [2024-11-20 13:43:26.745714] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:24.085 [2024-11-20 13:43:26.745790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:24.085 [2024-11-20 13:43:26.745805] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:24.085 [2024-11-20 13:43:26.745820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:24.085 [2024-11-20 13:43:26.745829] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:24.085 [2024-11-20 13:43:26.745843] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.085 [2024-11-20 13:43:26.788860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:24.085 BaseBdev1 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.085 [ 00:23:24.085 { 00:23:24.085 "name": "BaseBdev1", 00:23:24.085 "aliases": [ 00:23:24.085 "0920b326-63d2-4321-871c-868fce4afb82" 00:23:24.085 ], 00:23:24.085 "product_name": "Malloc disk", 00:23:24.085 "block_size": 512, 00:23:24.085 "num_blocks": 65536, 00:23:24.085 "uuid": "0920b326-63d2-4321-871c-868fce4afb82", 00:23:24.085 "assigned_rate_limits": { 00:23:24.085 "rw_ios_per_sec": 0, 00:23:24.085 "rw_mbytes_per_sec": 0, 00:23:24.085 "r_mbytes_per_sec": 0, 00:23:24.085 "w_mbytes_per_sec": 0 00:23:24.085 }, 00:23:24.085 "claimed": true, 00:23:24.085 "claim_type": "exclusive_write", 00:23:24.085 "zoned": false, 00:23:24.085 "supported_io_types": { 00:23:24.085 "read": true, 00:23:24.085 "write": true, 00:23:24.085 "unmap": true, 00:23:24.085 "flush": true, 00:23:24.085 "reset": true, 00:23:24.085 "nvme_admin": false, 00:23:24.085 "nvme_io": false, 00:23:24.085 "nvme_io_md": false, 00:23:24.085 "write_zeroes": true, 00:23:24.085 "zcopy": true, 00:23:24.085 "get_zone_info": false, 00:23:24.085 "zone_management": false, 00:23:24.085 "zone_append": false, 00:23:24.085 "compare": false, 00:23:24.085 "compare_and_write": false, 00:23:24.085 "abort": true, 00:23:24.085 "seek_hole": false, 00:23:24.085 "seek_data": false, 00:23:24.085 "copy": true, 00:23:24.085 "nvme_iov_md": false 00:23:24.085 }, 00:23:24.085 "memory_domains": [ 00:23:24.085 { 00:23:24.085 "dma_device_id": "system", 00:23:24.085 "dma_device_type": 1 00:23:24.085 }, 00:23:24.085 { 00:23:24.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.085 "dma_device_type": 2 00:23:24.085 } 00:23:24.085 ], 00:23:24.085 "driver_specific": {} 00:23:24.085 } 00:23:24.085 ] 00:23:24.085 13:43:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:24.085 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:24.086 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:24.086 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.086 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:24.086 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.086 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.086 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.086 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:23:24.086 "name": "Existed_Raid", 00:23:24.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.086 "strip_size_kb": 0, 00:23:24.086 "state": "configuring", 00:23:24.086 "raid_level": "raid1", 00:23:24.086 "superblock": false, 00:23:24.086 "num_base_bdevs": 3, 00:23:24.086 "num_base_bdevs_discovered": 1, 00:23:24.086 "num_base_bdevs_operational": 3, 00:23:24.086 "base_bdevs_list": [ 00:23:24.086 { 00:23:24.086 "name": "BaseBdev1", 00:23:24.086 "uuid": "0920b326-63d2-4321-871c-868fce4afb82", 00:23:24.086 "is_configured": true, 00:23:24.086 "data_offset": 0, 00:23:24.086 "data_size": 65536 00:23:24.086 }, 00:23:24.086 { 00:23:24.086 "name": "BaseBdev2", 00:23:24.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.086 "is_configured": false, 00:23:24.086 "data_offset": 0, 00:23:24.086 "data_size": 0 00:23:24.086 }, 00:23:24.086 { 00:23:24.086 "name": "BaseBdev3", 00:23:24.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.086 "is_configured": false, 00:23:24.086 "data_offset": 0, 00:23:24.086 "data_size": 0 00:23:24.086 } 00:23:24.086 ] 00:23:24.086 }' 00:23:24.086 13:43:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:24.086 13:43:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.656 [2024-11-20 13:43:27.345104] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:24.656 [2024-11-20 13:43:27.345171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.656 [2024-11-20 13:43:27.353148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:24.656 [2024-11-20 13:43:27.355737] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:24.656 [2024-11-20 13:43:27.355804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:24.656 [2024-11-20 13:43:27.355844] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:24.656 [2024-11-20 13:43:27.355862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:24.656 "name": "Existed_Raid", 00:23:24.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.656 "strip_size_kb": 0, 00:23:24.656 "state": "configuring", 00:23:24.656 "raid_level": "raid1", 00:23:24.656 "superblock": false, 00:23:24.656 "num_base_bdevs": 3, 00:23:24.656 "num_base_bdevs_discovered": 1, 00:23:24.656 "num_base_bdevs_operational": 3, 00:23:24.656 "base_bdevs_list": [ 00:23:24.656 { 00:23:24.656 "name": "BaseBdev1", 00:23:24.656 "uuid": "0920b326-63d2-4321-871c-868fce4afb82", 00:23:24.656 "is_configured": true, 00:23:24.656 "data_offset": 0, 00:23:24.656 "data_size": 65536 00:23:24.656 }, 00:23:24.656 { 00:23:24.656 "name": "BaseBdev2", 00:23:24.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.656 
"is_configured": false, 00:23:24.656 "data_offset": 0, 00:23:24.656 "data_size": 0 00:23:24.656 }, 00:23:24.656 { 00:23:24.656 "name": "BaseBdev3", 00:23:24.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.656 "is_configured": false, 00:23:24.656 "data_offset": 0, 00:23:24.656 "data_size": 0 00:23:24.656 } 00:23:24.656 ] 00:23:24.656 }' 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:24.656 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.226 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.227 [2024-11-20 13:43:27.925461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:25.227 BaseBdev2 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:25.227 13:43:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.227 [ 00:23:25.227 { 00:23:25.227 "name": "BaseBdev2", 00:23:25.227 "aliases": [ 00:23:25.227 "d24dfe30-94ce-45c3-9e56-4e64b0b8a08d" 00:23:25.227 ], 00:23:25.227 "product_name": "Malloc disk", 00:23:25.227 "block_size": 512, 00:23:25.227 "num_blocks": 65536, 00:23:25.227 "uuid": "d24dfe30-94ce-45c3-9e56-4e64b0b8a08d", 00:23:25.227 "assigned_rate_limits": { 00:23:25.227 "rw_ios_per_sec": 0, 00:23:25.227 "rw_mbytes_per_sec": 0, 00:23:25.227 "r_mbytes_per_sec": 0, 00:23:25.227 "w_mbytes_per_sec": 0 00:23:25.227 }, 00:23:25.227 "claimed": true, 00:23:25.227 "claim_type": "exclusive_write", 00:23:25.227 "zoned": false, 00:23:25.227 "supported_io_types": { 00:23:25.227 "read": true, 00:23:25.227 "write": true, 00:23:25.227 "unmap": true, 00:23:25.227 "flush": true, 00:23:25.227 "reset": true, 00:23:25.227 "nvme_admin": false, 00:23:25.227 "nvme_io": false, 00:23:25.227 "nvme_io_md": false, 00:23:25.227 "write_zeroes": true, 00:23:25.227 "zcopy": true, 00:23:25.227 "get_zone_info": false, 00:23:25.227 "zone_management": false, 00:23:25.227 "zone_append": false, 00:23:25.227 "compare": false, 00:23:25.227 "compare_and_write": false, 00:23:25.227 "abort": true, 00:23:25.227 "seek_hole": false, 00:23:25.227 "seek_data": false, 00:23:25.227 "copy": true, 00:23:25.227 "nvme_iov_md": false 00:23:25.227 }, 00:23:25.227 
"memory_domains": [ 00:23:25.227 { 00:23:25.227 "dma_device_id": "system", 00:23:25.227 "dma_device_type": 1 00:23:25.227 }, 00:23:25.227 { 00:23:25.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:25.227 "dma_device_type": 2 00:23:25.227 } 00:23:25.227 ], 00:23:25.227 "driver_specific": {} 00:23:25.227 } 00:23:25.227 ] 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.227 13:43:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.227 13:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:25.227 "name": "Existed_Raid", 00:23:25.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.227 "strip_size_kb": 0, 00:23:25.227 "state": "configuring", 00:23:25.227 "raid_level": "raid1", 00:23:25.227 "superblock": false, 00:23:25.227 "num_base_bdevs": 3, 00:23:25.227 "num_base_bdevs_discovered": 2, 00:23:25.227 "num_base_bdevs_operational": 3, 00:23:25.227 "base_bdevs_list": [ 00:23:25.227 { 00:23:25.227 "name": "BaseBdev1", 00:23:25.227 "uuid": "0920b326-63d2-4321-871c-868fce4afb82", 00:23:25.227 "is_configured": true, 00:23:25.227 "data_offset": 0, 00:23:25.227 "data_size": 65536 00:23:25.227 }, 00:23:25.227 { 00:23:25.227 "name": "BaseBdev2", 00:23:25.227 "uuid": "d24dfe30-94ce-45c3-9e56-4e64b0b8a08d", 00:23:25.227 "is_configured": true, 00:23:25.227 "data_offset": 0, 00:23:25.227 "data_size": 65536 00:23:25.227 }, 00:23:25.227 { 00:23:25.227 "name": "BaseBdev3", 00:23:25.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.227 "is_configured": false, 00:23:25.227 "data_offset": 0, 00:23:25.227 "data_size": 0 00:23:25.227 } 00:23:25.227 ] 00:23:25.227 }' 00:23:25.227 13:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:25.227 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.795 13:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.796 [2024-11-20 13:43:28.512832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:25.796 [2024-11-20 13:43:28.512943] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:25.796 [2024-11-20 13:43:28.512965] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:25.796 [2024-11-20 13:43:28.513316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:25.796 [2024-11-20 13:43:28.513563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:25.796 [2024-11-20 13:43:28.513588] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:25.796 [2024-11-20 13:43:28.513908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:25.796 BaseBdev3 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.796 [ 00:23:25.796 { 00:23:25.796 "name": "BaseBdev3", 00:23:25.796 "aliases": [ 00:23:25.796 "bdaa6321-1c1b-43a3-bfc3-711318559760" 00:23:25.796 ], 00:23:25.796 "product_name": "Malloc disk", 00:23:25.796 "block_size": 512, 00:23:25.796 "num_blocks": 65536, 00:23:25.796 "uuid": "bdaa6321-1c1b-43a3-bfc3-711318559760", 00:23:25.796 "assigned_rate_limits": { 00:23:25.796 "rw_ios_per_sec": 0, 00:23:25.796 "rw_mbytes_per_sec": 0, 00:23:25.796 "r_mbytes_per_sec": 0, 00:23:25.796 "w_mbytes_per_sec": 0 00:23:25.796 }, 00:23:25.796 "claimed": true, 00:23:25.796 "claim_type": "exclusive_write", 00:23:25.796 "zoned": false, 00:23:25.796 "supported_io_types": { 00:23:25.796 "read": true, 00:23:25.796 "write": true, 00:23:25.796 "unmap": true, 00:23:25.796 "flush": true, 00:23:25.796 "reset": true, 00:23:25.796 "nvme_admin": false, 00:23:25.796 "nvme_io": false, 00:23:25.796 "nvme_io_md": false, 00:23:25.796 "write_zeroes": true, 00:23:25.796 "zcopy": true, 00:23:25.796 "get_zone_info": false, 00:23:25.796 "zone_management": false, 00:23:25.796 "zone_append": false, 00:23:25.796 "compare": false, 00:23:25.796 "compare_and_write": false, 00:23:25.796 "abort": true, 00:23:25.796 "seek_hole": false, 00:23:25.796 "seek_data": false, 00:23:25.796 
"copy": true, 00:23:25.796 "nvme_iov_md": false 00:23:25.796 }, 00:23:25.796 "memory_domains": [ 00:23:25.796 { 00:23:25.796 "dma_device_id": "system", 00:23:25.796 "dma_device_type": 1 00:23:25.796 }, 00:23:25.796 { 00:23:25.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:25.796 "dma_device_type": 2 00:23:25.796 } 00:23:25.796 ], 00:23:25.796 "driver_specific": {} 00:23:25.796 } 00:23:25.796 ] 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:25.796 13:43:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:25.796 "name": "Existed_Raid", 00:23:25.796 "uuid": "03488987-a88b-44f5-bf40-dd2b99bc869e", 00:23:25.796 "strip_size_kb": 0, 00:23:25.796 "state": "online", 00:23:25.796 "raid_level": "raid1", 00:23:25.796 "superblock": false, 00:23:25.796 "num_base_bdevs": 3, 00:23:25.796 "num_base_bdevs_discovered": 3, 00:23:25.796 "num_base_bdevs_operational": 3, 00:23:25.796 "base_bdevs_list": [ 00:23:25.796 { 00:23:25.796 "name": "BaseBdev1", 00:23:25.796 "uuid": "0920b326-63d2-4321-871c-868fce4afb82", 00:23:25.796 "is_configured": true, 00:23:25.796 "data_offset": 0, 00:23:25.796 "data_size": 65536 00:23:25.796 }, 00:23:25.796 { 00:23:25.796 "name": "BaseBdev2", 00:23:25.796 "uuid": "d24dfe30-94ce-45c3-9e56-4e64b0b8a08d", 00:23:25.796 "is_configured": true, 00:23:25.796 "data_offset": 0, 00:23:25.796 "data_size": 65536 00:23:25.796 }, 00:23:25.796 { 00:23:25.796 "name": "BaseBdev3", 00:23:25.796 "uuid": "bdaa6321-1c1b-43a3-bfc3-711318559760", 00:23:25.796 "is_configured": true, 00:23:25.796 "data_offset": 0, 00:23:25.796 "data_size": 65536 00:23:25.796 } 00:23:25.796 ] 00:23:25.796 }' 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:25.796 13:43:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.364 13:43:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:26.364 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:26.364 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:26.364 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:26.364 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:26.364 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:26.364 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:26.364 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.364 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.364 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:26.364 [2024-11-20 13:43:29.069437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:26.364 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.364 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:26.364 "name": "Existed_Raid", 00:23:26.364 "aliases": [ 00:23:26.364 "03488987-a88b-44f5-bf40-dd2b99bc869e" 00:23:26.364 ], 00:23:26.364 "product_name": "Raid Volume", 00:23:26.364 "block_size": 512, 00:23:26.364 "num_blocks": 65536, 00:23:26.364 "uuid": "03488987-a88b-44f5-bf40-dd2b99bc869e", 00:23:26.364 "assigned_rate_limits": { 00:23:26.364 "rw_ios_per_sec": 0, 00:23:26.364 "rw_mbytes_per_sec": 0, 00:23:26.364 "r_mbytes_per_sec": 0, 00:23:26.364 "w_mbytes_per_sec": 0 00:23:26.364 }, 00:23:26.364 "claimed": false, 00:23:26.364 "zoned": false, 
00:23:26.364 "supported_io_types": { 00:23:26.364 "read": true, 00:23:26.364 "write": true, 00:23:26.364 "unmap": false, 00:23:26.364 "flush": false, 00:23:26.364 "reset": true, 00:23:26.364 "nvme_admin": false, 00:23:26.364 "nvme_io": false, 00:23:26.364 "nvme_io_md": false, 00:23:26.364 "write_zeroes": true, 00:23:26.364 "zcopy": false, 00:23:26.364 "get_zone_info": false, 00:23:26.364 "zone_management": false, 00:23:26.364 "zone_append": false, 00:23:26.364 "compare": false, 00:23:26.364 "compare_and_write": false, 00:23:26.364 "abort": false, 00:23:26.364 "seek_hole": false, 00:23:26.364 "seek_data": false, 00:23:26.364 "copy": false, 00:23:26.364 "nvme_iov_md": false 00:23:26.364 }, 00:23:26.364 "memory_domains": [ 00:23:26.364 { 00:23:26.364 "dma_device_id": "system", 00:23:26.364 "dma_device_type": 1 00:23:26.364 }, 00:23:26.364 { 00:23:26.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.364 "dma_device_type": 2 00:23:26.364 }, 00:23:26.364 { 00:23:26.364 "dma_device_id": "system", 00:23:26.364 "dma_device_type": 1 00:23:26.364 }, 00:23:26.364 { 00:23:26.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.364 "dma_device_type": 2 00:23:26.364 }, 00:23:26.364 { 00:23:26.364 "dma_device_id": "system", 00:23:26.364 "dma_device_type": 1 00:23:26.364 }, 00:23:26.364 { 00:23:26.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.364 "dma_device_type": 2 00:23:26.364 } 00:23:26.364 ], 00:23:26.364 "driver_specific": { 00:23:26.364 "raid": { 00:23:26.364 "uuid": "03488987-a88b-44f5-bf40-dd2b99bc869e", 00:23:26.364 "strip_size_kb": 0, 00:23:26.364 "state": "online", 00:23:26.364 "raid_level": "raid1", 00:23:26.364 "superblock": false, 00:23:26.364 "num_base_bdevs": 3, 00:23:26.364 "num_base_bdevs_discovered": 3, 00:23:26.364 "num_base_bdevs_operational": 3, 00:23:26.364 "base_bdevs_list": [ 00:23:26.364 { 00:23:26.364 "name": "BaseBdev1", 00:23:26.364 "uuid": "0920b326-63d2-4321-871c-868fce4afb82", 00:23:26.364 "is_configured": true, 00:23:26.364 
"data_offset": 0, 00:23:26.364 "data_size": 65536 00:23:26.364 }, 00:23:26.364 { 00:23:26.364 "name": "BaseBdev2", 00:23:26.364 "uuid": "d24dfe30-94ce-45c3-9e56-4e64b0b8a08d", 00:23:26.364 "is_configured": true, 00:23:26.364 "data_offset": 0, 00:23:26.364 "data_size": 65536 00:23:26.364 }, 00:23:26.364 { 00:23:26.364 "name": "BaseBdev3", 00:23:26.364 "uuid": "bdaa6321-1c1b-43a3-bfc3-711318559760", 00:23:26.364 "is_configured": true, 00:23:26.364 "data_offset": 0, 00:23:26.364 "data_size": 65536 00:23:26.364 } 00:23:26.364 ] 00:23:26.364 } 00:23:26.364 } 00:23:26.364 }' 00:23:26.364 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:26.364 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:26.364 BaseBdev2 00:23:26.364 BaseBdev3' 00:23:26.364 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:26.364 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:26.364 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:26.364 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:26.364 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:26.365 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.365 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.365 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.365 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:23:26.365 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:26.365 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:26.365 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:26.365 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:26.365 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.365 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.623 [2024-11-20 13:43:29.381205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:26.623 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.624 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:26.624 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.624 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:26.624 "name": "Existed_Raid", 00:23:26.624 "uuid": "03488987-a88b-44f5-bf40-dd2b99bc869e", 00:23:26.624 "strip_size_kb": 0, 00:23:26.624 "state": "online", 00:23:26.624 "raid_level": "raid1", 00:23:26.624 "superblock": false, 00:23:26.624 "num_base_bdevs": 3, 00:23:26.624 "num_base_bdevs_discovered": 2, 00:23:26.624 "num_base_bdevs_operational": 2, 00:23:26.624 "base_bdevs_list": [ 00:23:26.624 { 00:23:26.624 "name": null, 00:23:26.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.624 "is_configured": false, 00:23:26.624 "data_offset": 0, 00:23:26.624 "data_size": 65536 00:23:26.624 }, 00:23:26.624 { 00:23:26.624 "name": "BaseBdev2", 00:23:26.624 "uuid": "d24dfe30-94ce-45c3-9e56-4e64b0b8a08d", 00:23:26.624 "is_configured": true, 00:23:26.624 "data_offset": 0, 00:23:26.624 "data_size": 65536 00:23:26.624 }, 00:23:26.624 { 00:23:26.624 "name": "BaseBdev3", 00:23:26.624 "uuid": "bdaa6321-1c1b-43a3-bfc3-711318559760", 00:23:26.624 "is_configured": true, 00:23:26.624 "data_offset": 0, 00:23:26.624 "data_size": 65536 00:23:26.624 } 00:23:26.624 ] 
00:23:26.624 }' 00:23:26.624 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:26.624 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.222 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:27.222 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:27.222 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:27.222 13:43:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.222 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.222 13:43:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.222 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.222 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:27.222 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:27.222 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:27.222 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.222 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.222 [2024-11-20 13:43:30.046762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:27.481 13:43:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.481 [2024-11-20 13:43:30.194016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:27.481 [2024-11-20 13:43:30.194140] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:27.481 [2024-11-20 13:43:30.282922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:27.481 [2024-11-20 13:43:30.283139] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:27.481 [2024-11-20 13:43:30.283298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:27.481 13:43:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.481 BaseBdev2 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:27.481 
13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.481 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.740 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.740 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:27.740 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.740 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.740 [ 00:23:27.740 { 00:23:27.740 "name": "BaseBdev2", 00:23:27.740 "aliases": [ 00:23:27.740 "8371042d-cb5a-43e7-bc53-219302c70fe1" 00:23:27.740 ], 00:23:27.740 "product_name": "Malloc disk", 00:23:27.740 "block_size": 512, 00:23:27.740 "num_blocks": 65536, 00:23:27.740 "uuid": "8371042d-cb5a-43e7-bc53-219302c70fe1", 00:23:27.740 "assigned_rate_limits": { 00:23:27.740 "rw_ios_per_sec": 0, 00:23:27.740 "rw_mbytes_per_sec": 0, 00:23:27.740 "r_mbytes_per_sec": 0, 00:23:27.740 "w_mbytes_per_sec": 0 00:23:27.740 }, 00:23:27.740 "claimed": false, 00:23:27.740 "zoned": false, 00:23:27.740 "supported_io_types": { 00:23:27.740 "read": true, 00:23:27.740 "write": true, 00:23:27.740 "unmap": true, 00:23:27.740 "flush": true, 00:23:27.740 "reset": true, 00:23:27.740 "nvme_admin": false, 00:23:27.740 "nvme_io": false, 00:23:27.740 "nvme_io_md": false, 00:23:27.740 "write_zeroes": true, 
00:23:27.740 "zcopy": true, 00:23:27.740 "get_zone_info": false, 00:23:27.740 "zone_management": false, 00:23:27.740 "zone_append": false, 00:23:27.740 "compare": false, 00:23:27.740 "compare_and_write": false, 00:23:27.740 "abort": true, 00:23:27.740 "seek_hole": false, 00:23:27.740 "seek_data": false, 00:23:27.740 "copy": true, 00:23:27.740 "nvme_iov_md": false 00:23:27.740 }, 00:23:27.740 "memory_domains": [ 00:23:27.740 { 00:23:27.740 "dma_device_id": "system", 00:23:27.740 "dma_device_type": 1 00:23:27.740 }, 00:23:27.740 { 00:23:27.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.740 "dma_device_type": 2 00:23:27.740 } 00:23:27.740 ], 00:23:27.740 "driver_specific": {} 00:23:27.740 } 00:23:27.741 ] 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.741 BaseBdev3 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:27.741 13:43:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.741 [ 00:23:27.741 { 00:23:27.741 "name": "BaseBdev3", 00:23:27.741 "aliases": [ 00:23:27.741 "9a502df5-3873-4e1e-9343-49cc6bef5a6b" 00:23:27.741 ], 00:23:27.741 "product_name": "Malloc disk", 00:23:27.741 "block_size": 512, 00:23:27.741 "num_blocks": 65536, 00:23:27.741 "uuid": "9a502df5-3873-4e1e-9343-49cc6bef5a6b", 00:23:27.741 "assigned_rate_limits": { 00:23:27.741 "rw_ios_per_sec": 0, 00:23:27.741 "rw_mbytes_per_sec": 0, 00:23:27.741 "r_mbytes_per_sec": 0, 00:23:27.741 "w_mbytes_per_sec": 0 00:23:27.741 }, 00:23:27.741 "claimed": false, 00:23:27.741 "zoned": false, 00:23:27.741 "supported_io_types": { 00:23:27.741 "read": true, 00:23:27.741 "write": true, 00:23:27.741 "unmap": true, 00:23:27.741 "flush": true, 00:23:27.741 "reset": true, 00:23:27.741 "nvme_admin": false, 00:23:27.741 "nvme_io": false, 00:23:27.741 "nvme_io_md": false, 00:23:27.741 "write_zeroes": true, 
00:23:27.741 "zcopy": true, 00:23:27.741 "get_zone_info": false, 00:23:27.741 "zone_management": false, 00:23:27.741 "zone_append": false, 00:23:27.741 "compare": false, 00:23:27.741 "compare_and_write": false, 00:23:27.741 "abort": true, 00:23:27.741 "seek_hole": false, 00:23:27.741 "seek_data": false, 00:23:27.741 "copy": true, 00:23:27.741 "nvme_iov_md": false 00:23:27.741 }, 00:23:27.741 "memory_domains": [ 00:23:27.741 { 00:23:27.741 "dma_device_id": "system", 00:23:27.741 "dma_device_type": 1 00:23:27.741 }, 00:23:27.741 { 00:23:27.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.741 "dma_device_type": 2 00:23:27.741 } 00:23:27.741 ], 00:23:27.741 "driver_specific": {} 00:23:27.741 } 00:23:27.741 ] 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.741 [2024-11-20 13:43:30.511012] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:27.741 [2024-11-20 13:43:30.511070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:27.741 [2024-11-20 13:43:30.511097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:27.741 [2024-11-20 13:43:30.513689] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:23:27.741 "name": "Existed_Raid", 00:23:27.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.741 "strip_size_kb": 0, 00:23:27.741 "state": "configuring", 00:23:27.741 "raid_level": "raid1", 00:23:27.741 "superblock": false, 00:23:27.741 "num_base_bdevs": 3, 00:23:27.741 "num_base_bdevs_discovered": 2, 00:23:27.741 "num_base_bdevs_operational": 3, 00:23:27.741 "base_bdevs_list": [ 00:23:27.741 { 00:23:27.741 "name": "BaseBdev1", 00:23:27.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.741 "is_configured": false, 00:23:27.741 "data_offset": 0, 00:23:27.741 "data_size": 0 00:23:27.741 }, 00:23:27.741 { 00:23:27.741 "name": "BaseBdev2", 00:23:27.741 "uuid": "8371042d-cb5a-43e7-bc53-219302c70fe1", 00:23:27.741 "is_configured": true, 00:23:27.741 "data_offset": 0, 00:23:27.741 "data_size": 65536 00:23:27.741 }, 00:23:27.741 { 00:23:27.741 "name": "BaseBdev3", 00:23:27.741 "uuid": "9a502df5-3873-4e1e-9343-49cc6bef5a6b", 00:23:27.741 "is_configured": true, 00:23:27.741 "data_offset": 0, 00:23:27.741 "data_size": 65536 00:23:27.741 } 00:23:27.741 ] 00:23:27.741 }' 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:27.741 13:43:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.309 [2024-11-20 13:43:31.043226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:28.309 "name": "Existed_Raid", 00:23:28.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.309 "strip_size_kb": 0, 00:23:28.309 "state": "configuring", 00:23:28.309 "raid_level": "raid1", 00:23:28.309 "superblock": false, 00:23:28.309 "num_base_bdevs": 3, 
00:23:28.309 "num_base_bdevs_discovered": 1, 00:23:28.309 "num_base_bdevs_operational": 3, 00:23:28.309 "base_bdevs_list": [ 00:23:28.309 { 00:23:28.309 "name": "BaseBdev1", 00:23:28.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.309 "is_configured": false, 00:23:28.309 "data_offset": 0, 00:23:28.309 "data_size": 0 00:23:28.309 }, 00:23:28.309 { 00:23:28.309 "name": null, 00:23:28.309 "uuid": "8371042d-cb5a-43e7-bc53-219302c70fe1", 00:23:28.309 "is_configured": false, 00:23:28.309 "data_offset": 0, 00:23:28.309 "data_size": 65536 00:23:28.309 }, 00:23:28.309 { 00:23:28.309 "name": "BaseBdev3", 00:23:28.309 "uuid": "9a502df5-3873-4e1e-9343-49cc6bef5a6b", 00:23:28.309 "is_configured": true, 00:23:28.309 "data_offset": 0, 00:23:28.309 "data_size": 65536 00:23:28.309 } 00:23:28.309 ] 00:23:28.309 }' 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:28.309 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.878 13:43:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.878 [2024-11-20 13:43:31.670512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:28.878 BaseBdev1 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.878 [ 00:23:28.878 { 00:23:28.878 "name": "BaseBdev1", 00:23:28.878 "aliases": [ 00:23:28.878 "d6da4d33-bec8-479e-a9e7-2a00ff058b71" 00:23:28.878 ], 00:23:28.878 "product_name": "Malloc disk", 
00:23:28.878 "block_size": 512, 00:23:28.878 "num_blocks": 65536, 00:23:28.878 "uuid": "d6da4d33-bec8-479e-a9e7-2a00ff058b71", 00:23:28.878 "assigned_rate_limits": { 00:23:28.878 "rw_ios_per_sec": 0, 00:23:28.878 "rw_mbytes_per_sec": 0, 00:23:28.878 "r_mbytes_per_sec": 0, 00:23:28.878 "w_mbytes_per_sec": 0 00:23:28.878 }, 00:23:28.878 "claimed": true, 00:23:28.878 "claim_type": "exclusive_write", 00:23:28.878 "zoned": false, 00:23:28.878 "supported_io_types": { 00:23:28.878 "read": true, 00:23:28.878 "write": true, 00:23:28.878 "unmap": true, 00:23:28.878 "flush": true, 00:23:28.878 "reset": true, 00:23:28.878 "nvme_admin": false, 00:23:28.878 "nvme_io": false, 00:23:28.878 "nvme_io_md": false, 00:23:28.878 "write_zeroes": true, 00:23:28.878 "zcopy": true, 00:23:28.878 "get_zone_info": false, 00:23:28.878 "zone_management": false, 00:23:28.878 "zone_append": false, 00:23:28.878 "compare": false, 00:23:28.878 "compare_and_write": false, 00:23:28.878 "abort": true, 00:23:28.878 "seek_hole": false, 00:23:28.878 "seek_data": false, 00:23:28.878 "copy": true, 00:23:28.878 "nvme_iov_md": false 00:23:28.878 }, 00:23:28.878 "memory_domains": [ 00:23:28.878 { 00:23:28.878 "dma_device_id": "system", 00:23:28.878 "dma_device_type": 1 00:23:28.878 }, 00:23:28.878 { 00:23:28.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.878 "dma_device_type": 2 00:23:28.878 } 00:23:28.878 ], 00:23:28.878 "driver_specific": {} 00:23:28.878 } 00:23:28.878 ] 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:28.878 "name": "Existed_Raid", 00:23:28.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.878 "strip_size_kb": 0, 00:23:28.878 "state": "configuring", 00:23:28.878 "raid_level": "raid1", 00:23:28.878 "superblock": false, 00:23:28.878 "num_base_bdevs": 3, 00:23:28.878 "num_base_bdevs_discovered": 2, 00:23:28.878 "num_base_bdevs_operational": 3, 00:23:28.878 "base_bdevs_list": [ 00:23:28.878 { 00:23:28.878 "name": "BaseBdev1", 00:23:28.878 "uuid": 
"d6da4d33-bec8-479e-a9e7-2a00ff058b71", 00:23:28.878 "is_configured": true, 00:23:28.878 "data_offset": 0, 00:23:28.878 "data_size": 65536 00:23:28.878 }, 00:23:28.878 { 00:23:28.878 "name": null, 00:23:28.878 "uuid": "8371042d-cb5a-43e7-bc53-219302c70fe1", 00:23:28.878 "is_configured": false, 00:23:28.878 "data_offset": 0, 00:23:28.878 "data_size": 65536 00:23:28.878 }, 00:23:28.878 { 00:23:28.878 "name": "BaseBdev3", 00:23:28.878 "uuid": "9a502df5-3873-4e1e-9343-49cc6bef5a6b", 00:23:28.878 "is_configured": true, 00:23:28.878 "data_offset": 0, 00:23:28.878 "data_size": 65536 00:23:28.878 } 00:23:28.878 ] 00:23:28.878 }' 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:28.878 13:43:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.446 [2024-11-20 13:43:32.314782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:29.446 13:43:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.446 13:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.705 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:29.705 "name": "Existed_Raid", 00:23:29.705 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:29.705 "strip_size_kb": 0, 00:23:29.705 "state": "configuring", 00:23:29.705 "raid_level": "raid1", 00:23:29.705 "superblock": false, 00:23:29.705 "num_base_bdevs": 3, 00:23:29.705 "num_base_bdevs_discovered": 1, 00:23:29.705 "num_base_bdevs_operational": 3, 00:23:29.705 "base_bdevs_list": [ 00:23:29.705 { 00:23:29.705 "name": "BaseBdev1", 00:23:29.705 "uuid": "d6da4d33-bec8-479e-a9e7-2a00ff058b71", 00:23:29.705 "is_configured": true, 00:23:29.705 "data_offset": 0, 00:23:29.705 "data_size": 65536 00:23:29.705 }, 00:23:29.705 { 00:23:29.705 "name": null, 00:23:29.705 "uuid": "8371042d-cb5a-43e7-bc53-219302c70fe1", 00:23:29.705 "is_configured": false, 00:23:29.705 "data_offset": 0, 00:23:29.705 "data_size": 65536 00:23:29.705 }, 00:23:29.705 { 00:23:29.705 "name": null, 00:23:29.705 "uuid": "9a502df5-3873-4e1e-9343-49cc6bef5a6b", 00:23:29.705 "is_configured": false, 00:23:29.705 "data_offset": 0, 00:23:29.705 "data_size": 65536 00:23:29.705 } 00:23:29.705 ] 00:23:29.705 }' 00:23:29.705 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:29.705 13:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.964 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.964 13:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.964 13:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.964 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:29.964 13:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.964 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:23:29.964 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:29.964 13:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.964 13:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.223 [2024-11-20 13:43:32.878982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:30.223 13:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.223 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:30.223 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:30.223 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:30.223 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:30.223 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:30.223 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:30.223 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.223 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.223 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:30.223 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.223 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.223 13:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.223 13:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:23:30.223 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:30.223 13:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.223 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:30.223 "name": "Existed_Raid", 00:23:30.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.223 "strip_size_kb": 0, 00:23:30.223 "state": "configuring", 00:23:30.223 "raid_level": "raid1", 00:23:30.223 "superblock": false, 00:23:30.223 "num_base_bdevs": 3, 00:23:30.223 "num_base_bdevs_discovered": 2, 00:23:30.223 "num_base_bdevs_operational": 3, 00:23:30.223 "base_bdevs_list": [ 00:23:30.223 { 00:23:30.223 "name": "BaseBdev1", 00:23:30.223 "uuid": "d6da4d33-bec8-479e-a9e7-2a00ff058b71", 00:23:30.223 "is_configured": true, 00:23:30.223 "data_offset": 0, 00:23:30.223 "data_size": 65536 00:23:30.223 }, 00:23:30.223 { 00:23:30.223 "name": null, 00:23:30.223 "uuid": "8371042d-cb5a-43e7-bc53-219302c70fe1", 00:23:30.223 "is_configured": false, 00:23:30.223 "data_offset": 0, 00:23:30.223 "data_size": 65536 00:23:30.223 }, 00:23:30.223 { 00:23:30.223 "name": "BaseBdev3", 00:23:30.223 "uuid": "9a502df5-3873-4e1e-9343-49cc6bef5a6b", 00:23:30.223 "is_configured": true, 00:23:30.223 "data_offset": 0, 00:23:30.223 "data_size": 65536 00:23:30.223 } 00:23:30.223 ] 00:23:30.223 }' 00:23:30.223 13:43:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:30.223 13:43:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.790 13:43:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.790 [2024-11-20 13:43:33.492225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:30.790 "name": "Existed_Raid", 00:23:30.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.790 "strip_size_kb": 0, 00:23:30.790 "state": "configuring", 00:23:30.790 "raid_level": "raid1", 00:23:30.790 "superblock": false, 00:23:30.790 "num_base_bdevs": 3, 00:23:30.790 "num_base_bdevs_discovered": 1, 00:23:30.790 "num_base_bdevs_operational": 3, 00:23:30.790 "base_bdevs_list": [ 00:23:30.790 { 00:23:30.790 "name": null, 00:23:30.790 "uuid": "d6da4d33-bec8-479e-a9e7-2a00ff058b71", 00:23:30.790 "is_configured": false, 00:23:30.790 "data_offset": 0, 00:23:30.790 "data_size": 65536 00:23:30.790 }, 00:23:30.790 { 00:23:30.790 "name": null, 00:23:30.790 "uuid": "8371042d-cb5a-43e7-bc53-219302c70fe1", 00:23:30.790 "is_configured": false, 00:23:30.790 "data_offset": 0, 00:23:30.790 "data_size": 65536 00:23:30.790 }, 00:23:30.790 { 00:23:30.790 "name": "BaseBdev3", 00:23:30.790 "uuid": "9a502df5-3873-4e1e-9343-49cc6bef5a6b", 00:23:30.790 "is_configured": true, 00:23:30.790 "data_offset": 0, 00:23:30.790 "data_size": 65536 00:23:30.790 } 00:23:30.790 ] 00:23:30.790 }' 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:30.790 13:43:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:23:31.357 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.357 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.357 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.357 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:31.357 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.357 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:23:31.357 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:31.357 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.357 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.357 [2024-11-20 13:43:34.136143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:31.357 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.357 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:31.357 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:31.357 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:31.357 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:31.358 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:31.358 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:23:31.358 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.358 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.358 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.358 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.358 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:31.358 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.358 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.358 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.358 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.358 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.358 "name": "Existed_Raid", 00:23:31.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.358 "strip_size_kb": 0, 00:23:31.358 "state": "configuring", 00:23:31.358 "raid_level": "raid1", 00:23:31.358 "superblock": false, 00:23:31.358 "num_base_bdevs": 3, 00:23:31.358 "num_base_bdevs_discovered": 2, 00:23:31.358 "num_base_bdevs_operational": 3, 00:23:31.358 "base_bdevs_list": [ 00:23:31.358 { 00:23:31.358 "name": null, 00:23:31.358 "uuid": "d6da4d33-bec8-479e-a9e7-2a00ff058b71", 00:23:31.358 "is_configured": false, 00:23:31.358 "data_offset": 0, 00:23:31.358 "data_size": 65536 00:23:31.358 }, 00:23:31.358 { 00:23:31.358 "name": "BaseBdev2", 00:23:31.358 "uuid": "8371042d-cb5a-43e7-bc53-219302c70fe1", 00:23:31.358 "is_configured": true, 00:23:31.358 "data_offset": 0, 00:23:31.358 "data_size": 65536 00:23:31.358 }, 00:23:31.358 { 00:23:31.358 "name": "BaseBdev3", 
00:23:31.358 "uuid": "9a502df5-3873-4e1e-9343-49cc6bef5a6b", 00:23:31.358 "is_configured": true, 00:23:31.358 "data_offset": 0, 00:23:31.358 "data_size": 65536 00:23:31.358 } 00:23:31.358 ] 00:23:31.358 }' 00:23:31.358 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.358 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d6da4d33-bec8-479e-a9e7-2a00ff058b71 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:31.926 [2024-11-20 13:43:34.763050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:31.926 [2024-11-20 13:43:34.763272] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:31.926 [2024-11-20 13:43:34.763295] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:31.926 [2024-11-20 13:43:34.763616] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:31.926 [2024-11-20 13:43:34.763812] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:31.926 [2024-11-20 13:43:34.763834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:23:31.926 [2024-11-20 13:43:34.764133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.926 NewBaseBdev 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.926 
13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.926 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.926 [ 00:23:31.926 { 00:23:31.926 "name": "NewBaseBdev", 00:23:31.926 "aliases": [ 00:23:31.926 "d6da4d33-bec8-479e-a9e7-2a00ff058b71" 00:23:31.926 ], 00:23:31.926 "product_name": "Malloc disk", 00:23:31.926 "block_size": 512, 00:23:31.926 "num_blocks": 65536, 00:23:31.926 "uuid": "d6da4d33-bec8-479e-a9e7-2a00ff058b71", 00:23:31.926 "assigned_rate_limits": { 00:23:31.926 "rw_ios_per_sec": 0, 00:23:31.926 "rw_mbytes_per_sec": 0, 00:23:31.926 "r_mbytes_per_sec": 0, 00:23:31.926 "w_mbytes_per_sec": 0 00:23:31.926 }, 00:23:31.926 "claimed": true, 00:23:31.927 "claim_type": "exclusive_write", 00:23:31.927 "zoned": false, 00:23:31.927 "supported_io_types": { 00:23:31.927 "read": true, 00:23:31.927 "write": true, 00:23:31.927 "unmap": true, 00:23:31.927 "flush": true, 00:23:31.927 "reset": true, 00:23:31.927 "nvme_admin": false, 00:23:31.927 "nvme_io": false, 00:23:31.927 "nvme_io_md": false, 00:23:31.927 "write_zeroes": true, 00:23:31.927 "zcopy": true, 00:23:31.927 "get_zone_info": false, 00:23:31.927 "zone_management": false, 00:23:31.927 "zone_append": false, 00:23:31.927 "compare": false, 00:23:31.927 "compare_and_write": false, 00:23:31.927 "abort": true, 00:23:31.927 "seek_hole": false, 00:23:31.927 "seek_data": false, 00:23:31.927 "copy": true, 00:23:31.927 "nvme_iov_md": false 00:23:31.927 }, 00:23:31.927 "memory_domains": [ 00:23:31.927 { 00:23:31.927 "dma_device_id": "system", 00:23:31.927 "dma_device_type": 1 
00:23:31.927 }, 00:23:31.927 { 00:23:31.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.927 "dma_device_type": 2 00:23:31.927 } 00:23:31.927 ], 00:23:31.927 "driver_specific": {} 00:23:31.927 } 00:23:31.927 ] 00:23:31.927 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.927 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:23:31.927 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:23:31.927 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:31.927 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:31.927 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:31.927 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:31.927 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:31.927 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.927 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.927 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.927 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.927 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.927 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:31.927 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.927 13:43:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:31.927 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.186 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:32.186 "name": "Existed_Raid", 00:23:32.186 "uuid": "5f9a79d3-d9e7-4d26-ab97-107cd207e8f3", 00:23:32.186 "strip_size_kb": 0, 00:23:32.186 "state": "online", 00:23:32.186 "raid_level": "raid1", 00:23:32.186 "superblock": false, 00:23:32.186 "num_base_bdevs": 3, 00:23:32.186 "num_base_bdevs_discovered": 3, 00:23:32.186 "num_base_bdevs_operational": 3, 00:23:32.186 "base_bdevs_list": [ 00:23:32.186 { 00:23:32.186 "name": "NewBaseBdev", 00:23:32.186 "uuid": "d6da4d33-bec8-479e-a9e7-2a00ff058b71", 00:23:32.186 "is_configured": true, 00:23:32.186 "data_offset": 0, 00:23:32.186 "data_size": 65536 00:23:32.186 }, 00:23:32.186 { 00:23:32.186 "name": "BaseBdev2", 00:23:32.186 "uuid": "8371042d-cb5a-43e7-bc53-219302c70fe1", 00:23:32.186 "is_configured": true, 00:23:32.186 "data_offset": 0, 00:23:32.186 "data_size": 65536 00:23:32.186 }, 00:23:32.186 { 00:23:32.186 "name": "BaseBdev3", 00:23:32.186 "uuid": "9a502df5-3873-4e1e-9343-49cc6bef5a6b", 00:23:32.186 "is_configured": true, 00:23:32.186 "data_offset": 0, 00:23:32.186 "data_size": 65536 00:23:32.186 } 00:23:32.186 ] 00:23:32.186 }' 00:23:32.186 13:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:32.186 13:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.445 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:23:32.445 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:32.445 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:32.445 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:23:32.445 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:32.445 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:32.445 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:32.445 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.445 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.445 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:32.445 [2024-11-20 13:43:35.359636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:32.704 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.704 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:32.704 "name": "Existed_Raid", 00:23:32.704 "aliases": [ 00:23:32.704 "5f9a79d3-d9e7-4d26-ab97-107cd207e8f3" 00:23:32.704 ], 00:23:32.704 "product_name": "Raid Volume", 00:23:32.704 "block_size": 512, 00:23:32.704 "num_blocks": 65536, 00:23:32.704 "uuid": "5f9a79d3-d9e7-4d26-ab97-107cd207e8f3", 00:23:32.704 "assigned_rate_limits": { 00:23:32.704 "rw_ios_per_sec": 0, 00:23:32.704 "rw_mbytes_per_sec": 0, 00:23:32.704 "r_mbytes_per_sec": 0, 00:23:32.704 "w_mbytes_per_sec": 0 00:23:32.704 }, 00:23:32.704 "claimed": false, 00:23:32.704 "zoned": false, 00:23:32.704 "supported_io_types": { 00:23:32.704 "read": true, 00:23:32.704 "write": true, 00:23:32.704 "unmap": false, 00:23:32.704 "flush": false, 00:23:32.704 "reset": true, 00:23:32.704 "nvme_admin": false, 00:23:32.704 "nvme_io": false, 00:23:32.704 "nvme_io_md": false, 00:23:32.704 "write_zeroes": true, 00:23:32.704 "zcopy": false, 00:23:32.704 "get_zone_info": false, 00:23:32.704 "zone_management": false, 00:23:32.704 
"zone_append": false, 00:23:32.704 "compare": false, 00:23:32.704 "compare_and_write": false, 00:23:32.704 "abort": false, 00:23:32.705 "seek_hole": false, 00:23:32.705 "seek_data": false, 00:23:32.705 "copy": false, 00:23:32.705 "nvme_iov_md": false 00:23:32.705 }, 00:23:32.705 "memory_domains": [ 00:23:32.705 { 00:23:32.705 "dma_device_id": "system", 00:23:32.705 "dma_device_type": 1 00:23:32.705 }, 00:23:32.705 { 00:23:32.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:32.705 "dma_device_type": 2 00:23:32.705 }, 00:23:32.705 { 00:23:32.705 "dma_device_id": "system", 00:23:32.705 "dma_device_type": 1 00:23:32.705 }, 00:23:32.705 { 00:23:32.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:32.705 "dma_device_type": 2 00:23:32.705 }, 00:23:32.705 { 00:23:32.705 "dma_device_id": "system", 00:23:32.705 "dma_device_type": 1 00:23:32.705 }, 00:23:32.705 { 00:23:32.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:32.705 "dma_device_type": 2 00:23:32.705 } 00:23:32.705 ], 00:23:32.705 "driver_specific": { 00:23:32.705 "raid": { 00:23:32.705 "uuid": "5f9a79d3-d9e7-4d26-ab97-107cd207e8f3", 00:23:32.705 "strip_size_kb": 0, 00:23:32.705 "state": "online", 00:23:32.705 "raid_level": "raid1", 00:23:32.705 "superblock": false, 00:23:32.705 "num_base_bdevs": 3, 00:23:32.705 "num_base_bdevs_discovered": 3, 00:23:32.705 "num_base_bdevs_operational": 3, 00:23:32.705 "base_bdevs_list": [ 00:23:32.705 { 00:23:32.705 "name": "NewBaseBdev", 00:23:32.705 "uuid": "d6da4d33-bec8-479e-a9e7-2a00ff058b71", 00:23:32.705 "is_configured": true, 00:23:32.705 "data_offset": 0, 00:23:32.705 "data_size": 65536 00:23:32.705 }, 00:23:32.705 { 00:23:32.705 "name": "BaseBdev2", 00:23:32.705 "uuid": "8371042d-cb5a-43e7-bc53-219302c70fe1", 00:23:32.705 "is_configured": true, 00:23:32.705 "data_offset": 0, 00:23:32.705 "data_size": 65536 00:23:32.705 }, 00:23:32.705 { 00:23:32.705 "name": "BaseBdev3", 00:23:32.705 "uuid": "9a502df5-3873-4e1e-9343-49cc6bef5a6b", 00:23:32.705 "is_configured": true, 
00:23:32.705 "data_offset": 0, 00:23:32.705 "data_size": 65536 00:23:32.705 } 00:23:32.705 ] 00:23:32.705 } 00:23:32.705 } 00:23:32.705 }' 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:23:32.705 BaseBdev2 00:23:32.705 BaseBdev3' 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.705 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.964 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.964 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:32.964 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:32.964 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:32.964 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.964 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.964 [2024-11-20 13:43:35.667344] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:23:32.964 [2024-11-20 13:43:35.667426] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:32.964 [2024-11-20 13:43:35.667522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:32.964 [2024-11-20 13:43:35.667949] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:32.964 [2024-11-20 13:43:35.668169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:23:32.964 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.964 13:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67625 00:23:32.964 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67625 ']' 00:23:32.964 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67625 00:23:32.964 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:23:32.964 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.964 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67625 00:23:32.964 killing process with pid 67625 00:23:32.964 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:32.964 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:32.964 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67625' 00:23:32.964 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67625 00:23:32.964 [2024-11-20 13:43:35.705586] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:23:32.964 13:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67625 00:23:33.222 [2024-11-20 13:43:35.980007] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:23:34.599 00:23:34.599 real 0m11.901s 00:23:34.599 user 0m19.657s 00:23:34.599 sys 0m1.659s 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:34.599 ************************************ 00:23:34.599 END TEST raid_state_function_test 00:23:34.599 ************************************ 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.599 13:43:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:23:34.599 13:43:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:34.599 13:43:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:34.599 13:43:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:34.599 ************************************ 00:23:34.599 START TEST raid_state_function_test_sb 00:23:34.599 ************************************ 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:34.599 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:34.600 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:34.600 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:34.600 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:34.600 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:23:34.600 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:34.600 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:34.600 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68263 00:23:34.600 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:34.600 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68263' 00:23:34.600 Process raid pid: 68263 00:23:34.600 13:43:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68263 00:23:34.600 13:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68263 ']' 00:23:34.600 13:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.600 13:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.600 13:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.600 13:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.600 13:43:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:34.600 [2024-11-20 13:43:37.257025] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:23:34.600 [2024-11-20 13:43:37.257461] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.600 [2024-11-20 13:43:37.449574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.858 [2024-11-20 13:43:37.618733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.117 [2024-11-20 13:43:37.865163] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:35.117 [2024-11-20 13:43:37.865383] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:35.438 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.438 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:23:35.438 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:23:35.438 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.438 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.438 [2024-11-20 13:43:38.285167] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:35.438 [2024-11-20 13:43:38.285236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:35.438 [2024-11-20 13:43:38.285254] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:35.438 [2024-11-20 13:43:38.285272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:35.438 [2024-11-20 13:43:38.285282] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:23:35.438 [2024-11-20 13:43:38.285297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:35.438 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.438 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:35.438 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:35.438 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:35.438 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:35.438 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:35.438 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:35.438 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:35.438 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:35.438 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:35.438 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:35.438 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.438 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:35.439 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.439 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.439 13:43:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.716 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:35.716 "name": "Existed_Raid", 00:23:35.716 "uuid": "37c091c5-5075-48e4-8df3-e76e4c03765e", 00:23:35.716 "strip_size_kb": 0, 00:23:35.716 "state": "configuring", 00:23:35.716 "raid_level": "raid1", 00:23:35.716 "superblock": true, 00:23:35.716 "num_base_bdevs": 3, 00:23:35.716 "num_base_bdevs_discovered": 0, 00:23:35.716 "num_base_bdevs_operational": 3, 00:23:35.716 "base_bdevs_list": [ 00:23:35.716 { 00:23:35.716 "name": "BaseBdev1", 00:23:35.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.716 "is_configured": false, 00:23:35.716 "data_offset": 0, 00:23:35.716 "data_size": 0 00:23:35.716 }, 00:23:35.716 { 00:23:35.716 "name": "BaseBdev2", 00:23:35.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.716 "is_configured": false, 00:23:35.716 "data_offset": 0, 00:23:35.716 "data_size": 0 00:23:35.716 }, 00:23:35.716 { 00:23:35.716 "name": "BaseBdev3", 00:23:35.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.716 "is_configured": false, 00:23:35.716 "data_offset": 0, 00:23:35.716 "data_size": 0 00:23:35.716 } 00:23:35.716 ] 00:23:35.716 }' 00:23:35.716 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:35.716 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.975 [2024-11-20 13:43:38.805232] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:35.975 [2024-11-20 13:43:38.805308] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.975 [2024-11-20 13:43:38.813211] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:35.975 [2024-11-20 13:43:38.813269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:35.975 [2024-11-20 13:43:38.813286] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:35.975 [2024-11-20 13:43:38.813302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:35.975 [2024-11-20 13:43:38.813313] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:35.975 [2024-11-20 13:43:38.813327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.975 [2024-11-20 13:43:38.858752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:35.975 BaseBdev1 
00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.975 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:35.975 [ 00:23:35.975 { 00:23:35.975 "name": "BaseBdev1", 00:23:35.975 "aliases": [ 00:23:35.975 "5581aa93-459e-4010-81ae-c54cd7ea7960" 00:23:35.975 ], 00:23:35.975 "product_name": "Malloc disk", 00:23:35.975 "block_size": 512, 00:23:35.975 "num_blocks": 65536, 00:23:35.975 "uuid": "5581aa93-459e-4010-81ae-c54cd7ea7960", 00:23:35.975 "assigned_rate_limits": { 00:23:35.975 
"rw_ios_per_sec": 0, 00:23:35.975 "rw_mbytes_per_sec": 0, 00:23:35.975 "r_mbytes_per_sec": 0, 00:23:35.975 "w_mbytes_per_sec": 0 00:23:35.975 }, 00:23:35.975 "claimed": true, 00:23:35.975 "claim_type": "exclusive_write", 00:23:35.975 "zoned": false, 00:23:35.975 "supported_io_types": { 00:23:35.975 "read": true, 00:23:35.975 "write": true, 00:23:35.975 "unmap": true, 00:23:35.975 "flush": true, 00:23:35.975 "reset": true, 00:23:35.975 "nvme_admin": false, 00:23:35.975 "nvme_io": false, 00:23:35.975 "nvme_io_md": false, 00:23:35.975 "write_zeroes": true, 00:23:36.234 "zcopy": true, 00:23:36.234 "get_zone_info": false, 00:23:36.234 "zone_management": false, 00:23:36.234 "zone_append": false, 00:23:36.234 "compare": false, 00:23:36.234 "compare_and_write": false, 00:23:36.234 "abort": true, 00:23:36.234 "seek_hole": false, 00:23:36.234 "seek_data": false, 00:23:36.234 "copy": true, 00:23:36.234 "nvme_iov_md": false 00:23:36.234 }, 00:23:36.234 "memory_domains": [ 00:23:36.234 { 00:23:36.234 "dma_device_id": "system", 00:23:36.234 "dma_device_type": 1 00:23:36.234 }, 00:23:36.234 { 00:23:36.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:36.234 "dma_device_type": 2 00:23:36.234 } 00:23:36.234 ], 00:23:36.234 "driver_specific": {} 00:23:36.234 } 00:23:36.234 ] 00:23:36.234 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.234 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:36.234 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:36.234 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:36.234 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:36.234 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:23:36.234 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:36.234 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:36.234 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:36.234 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:36.234 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:36.234 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:36.234 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.234 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:36.234 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.234 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:36.234 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.234 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:36.234 "name": "Existed_Raid", 00:23:36.234 "uuid": "eccd6022-f9e1-4d61-899e-efa8758a37b9", 00:23:36.234 "strip_size_kb": 0, 00:23:36.234 "state": "configuring", 00:23:36.234 "raid_level": "raid1", 00:23:36.234 "superblock": true, 00:23:36.234 "num_base_bdevs": 3, 00:23:36.234 "num_base_bdevs_discovered": 1, 00:23:36.234 "num_base_bdevs_operational": 3, 00:23:36.234 "base_bdevs_list": [ 00:23:36.234 { 00:23:36.234 "name": "BaseBdev1", 00:23:36.234 "uuid": "5581aa93-459e-4010-81ae-c54cd7ea7960", 00:23:36.234 "is_configured": true, 00:23:36.234 "data_offset": 2048, 00:23:36.234 "data_size": 63488 
00:23:36.234 }, 00:23:36.234 { 00:23:36.234 "name": "BaseBdev2", 00:23:36.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.234 "is_configured": false, 00:23:36.234 "data_offset": 0, 00:23:36.234 "data_size": 0 00:23:36.234 }, 00:23:36.234 { 00:23:36.234 "name": "BaseBdev3", 00:23:36.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.234 "is_configured": false, 00:23:36.234 "data_offset": 0, 00:23:36.234 "data_size": 0 00:23:36.234 } 00:23:36.234 ] 00:23:36.234 }' 00:23:36.235 13:43:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:36.235 13:43:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:36.801 [2024-11-20 13:43:39.434986] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:36.801 [2024-11-20 13:43:39.435053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:36.801 [2024-11-20 13:43:39.447066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:36.801 [2024-11-20 13:43:39.449638] 
bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:36.801 [2024-11-20 13:43:39.449818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:36.801 [2024-11-20 13:43:39.449959] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:36.801 [2024-11-20 13:43:39.450128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:36.801 "name": "Existed_Raid", 00:23:36.801 "uuid": "9bb8966b-379f-4c48-b0cb-443e9820130b", 00:23:36.801 "strip_size_kb": 0, 00:23:36.801 "state": "configuring", 00:23:36.801 "raid_level": "raid1", 00:23:36.801 "superblock": true, 00:23:36.801 "num_base_bdevs": 3, 00:23:36.801 "num_base_bdevs_discovered": 1, 00:23:36.801 "num_base_bdevs_operational": 3, 00:23:36.801 "base_bdevs_list": [ 00:23:36.801 { 00:23:36.801 "name": "BaseBdev1", 00:23:36.801 "uuid": "5581aa93-459e-4010-81ae-c54cd7ea7960", 00:23:36.801 "is_configured": true, 00:23:36.801 "data_offset": 2048, 00:23:36.801 "data_size": 63488 00:23:36.801 }, 00:23:36.801 { 00:23:36.801 "name": "BaseBdev2", 00:23:36.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.801 "is_configured": false, 00:23:36.801 "data_offset": 0, 00:23:36.801 "data_size": 0 00:23:36.801 }, 00:23:36.801 { 00:23:36.801 "name": "BaseBdev3", 00:23:36.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.801 "is_configured": false, 00:23:36.801 "data_offset": 0, 00:23:36.801 "data_size": 0 00:23:36.801 } 00:23:36.801 ] 00:23:36.801 }' 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:36.801 13:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:23:37.061 13:43:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:37.061 13:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.061 13:43:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:37.320 [2024-11-20 13:43:40.002884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:37.320 BaseBdev2 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:37.320 [ 00:23:37.320 { 00:23:37.320 "name": "BaseBdev2", 00:23:37.320 "aliases": [ 00:23:37.320 "4a510ab0-a08c-408d-8e14-10f4f1dd00a5" 00:23:37.320 ], 00:23:37.320 "product_name": "Malloc disk", 00:23:37.320 "block_size": 512, 00:23:37.320 "num_blocks": 65536, 00:23:37.320 "uuid": "4a510ab0-a08c-408d-8e14-10f4f1dd00a5", 00:23:37.320 "assigned_rate_limits": { 00:23:37.320 "rw_ios_per_sec": 0, 00:23:37.320 "rw_mbytes_per_sec": 0, 00:23:37.320 "r_mbytes_per_sec": 0, 00:23:37.320 "w_mbytes_per_sec": 0 00:23:37.320 }, 00:23:37.320 "claimed": true, 00:23:37.320 "claim_type": "exclusive_write", 00:23:37.320 "zoned": false, 00:23:37.320 "supported_io_types": { 00:23:37.320 "read": true, 00:23:37.320 "write": true, 00:23:37.320 "unmap": true, 00:23:37.320 "flush": true, 00:23:37.320 "reset": true, 00:23:37.320 "nvme_admin": false, 00:23:37.320 "nvme_io": false, 00:23:37.320 "nvme_io_md": false, 00:23:37.320 "write_zeroes": true, 00:23:37.320 "zcopy": true, 00:23:37.320 "get_zone_info": false, 00:23:37.320 "zone_management": false, 00:23:37.320 "zone_append": false, 00:23:37.320 "compare": false, 00:23:37.320 "compare_and_write": false, 00:23:37.320 "abort": true, 00:23:37.320 "seek_hole": false, 00:23:37.320 "seek_data": false, 00:23:37.320 "copy": true, 00:23:37.320 "nvme_iov_md": false 00:23:37.320 }, 00:23:37.320 "memory_domains": [ 00:23:37.320 { 00:23:37.320 "dma_device_id": "system", 00:23:37.320 "dma_device_type": 1 00:23:37.320 }, 00:23:37.320 { 00:23:37.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:37.320 "dma_device_type": 2 00:23:37.320 } 00:23:37.320 ], 00:23:37.320 "driver_specific": {} 00:23:37.320 } 00:23:37.320 ] 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.320 
13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:37.320 "name": "Existed_Raid", 00:23:37.320 "uuid": "9bb8966b-379f-4c48-b0cb-443e9820130b", 00:23:37.320 "strip_size_kb": 0, 00:23:37.320 "state": "configuring", 00:23:37.320 "raid_level": "raid1", 00:23:37.320 "superblock": true, 00:23:37.320 "num_base_bdevs": 3, 00:23:37.320 "num_base_bdevs_discovered": 2, 00:23:37.320 "num_base_bdevs_operational": 3, 00:23:37.320 "base_bdevs_list": [ 00:23:37.320 { 00:23:37.320 "name": "BaseBdev1", 00:23:37.320 "uuid": "5581aa93-459e-4010-81ae-c54cd7ea7960", 00:23:37.320 "is_configured": true, 00:23:37.320 "data_offset": 2048, 00:23:37.320 "data_size": 63488 00:23:37.320 }, 00:23:37.320 { 00:23:37.320 "name": "BaseBdev2", 00:23:37.320 "uuid": "4a510ab0-a08c-408d-8e14-10f4f1dd00a5", 00:23:37.320 "is_configured": true, 00:23:37.320 "data_offset": 2048, 00:23:37.320 "data_size": 63488 00:23:37.320 }, 00:23:37.320 { 00:23:37.320 "name": "BaseBdev3", 00:23:37.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.320 "is_configured": false, 00:23:37.320 "data_offset": 0, 00:23:37.320 "data_size": 0 00:23:37.320 } 00:23:37.320 ] 00:23:37.320 }' 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:37.320 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:37.886 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:37.886 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.886 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:37.886 [2024-11-20 13:43:40.604333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:37.886 [2024-11-20 13:43:40.604693] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:23:37.886 [2024-11-20 13:43:40.604724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:37.886 [2024-11-20 13:43:40.605112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:37.886 BaseBdev3 00:23:37.886 [2024-11-20 13:43:40.605321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:37.886 [2024-11-20 13:43:40.605338] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:37.886 [2024-11-20 13:43:40.605528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:37.886 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.886 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:23:37.886 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:23:37.886 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:37.886 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:37.886 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:37.886 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:37.886 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.887 13:43:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:37.887 [ 00:23:37.887 { 00:23:37.887 "name": "BaseBdev3", 00:23:37.887 "aliases": [ 00:23:37.887 "6a1e692b-083c-4ac9-b621-20178bd490b4" 00:23:37.887 ], 00:23:37.887 "product_name": "Malloc disk", 00:23:37.887 "block_size": 512, 00:23:37.887 "num_blocks": 65536, 00:23:37.887 "uuid": "6a1e692b-083c-4ac9-b621-20178bd490b4", 00:23:37.887 "assigned_rate_limits": { 00:23:37.887 "rw_ios_per_sec": 0, 00:23:37.887 "rw_mbytes_per_sec": 0, 00:23:37.887 "r_mbytes_per_sec": 0, 00:23:37.887 "w_mbytes_per_sec": 0 00:23:37.887 }, 00:23:37.887 "claimed": true, 00:23:37.887 "claim_type": "exclusive_write", 00:23:37.887 "zoned": false, 00:23:37.887 "supported_io_types": { 00:23:37.887 "read": true, 00:23:37.887 "write": true, 00:23:37.887 "unmap": true, 00:23:37.887 "flush": true, 00:23:37.887 "reset": true, 00:23:37.887 "nvme_admin": false, 00:23:37.887 "nvme_io": false, 00:23:37.887 "nvme_io_md": false, 00:23:37.887 "write_zeroes": true, 00:23:37.887 "zcopy": true, 00:23:37.887 "get_zone_info": false, 00:23:37.887 "zone_management": false, 00:23:37.887 "zone_append": false, 00:23:37.887 "compare": false, 00:23:37.887 "compare_and_write": false, 00:23:37.887 "abort": true, 00:23:37.887 "seek_hole": false, 00:23:37.887 "seek_data": false, 00:23:37.887 "copy": true, 00:23:37.887 "nvme_iov_md": false 00:23:37.887 }, 00:23:37.887 "memory_domains": [ 00:23:37.887 { 00:23:37.887 "dma_device_id": "system", 00:23:37.887 "dma_device_type": 1 00:23:37.887 }, 00:23:37.887 { 00:23:37.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:37.887 "dma_device_type": 2 00:23:37.887 } 00:23:37.887 ], 00:23:37.887 "driver_specific": {} 00:23:37.887 } 00:23:37.887 ] 
00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.887 
13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:37.887 "name": "Existed_Raid", 00:23:37.887 "uuid": "9bb8966b-379f-4c48-b0cb-443e9820130b", 00:23:37.887 "strip_size_kb": 0, 00:23:37.887 "state": "online", 00:23:37.887 "raid_level": "raid1", 00:23:37.887 "superblock": true, 00:23:37.887 "num_base_bdevs": 3, 00:23:37.887 "num_base_bdevs_discovered": 3, 00:23:37.887 "num_base_bdevs_operational": 3, 00:23:37.887 "base_bdevs_list": [ 00:23:37.887 { 00:23:37.887 "name": "BaseBdev1", 00:23:37.887 "uuid": "5581aa93-459e-4010-81ae-c54cd7ea7960", 00:23:37.887 "is_configured": true, 00:23:37.887 "data_offset": 2048, 00:23:37.887 "data_size": 63488 00:23:37.887 }, 00:23:37.887 { 00:23:37.887 "name": "BaseBdev2", 00:23:37.887 "uuid": "4a510ab0-a08c-408d-8e14-10f4f1dd00a5", 00:23:37.887 "is_configured": true, 00:23:37.887 "data_offset": 2048, 00:23:37.887 "data_size": 63488 00:23:37.887 }, 00:23:37.887 { 00:23:37.887 "name": "BaseBdev3", 00:23:37.887 "uuid": "6a1e692b-083c-4ac9-b621-20178bd490b4", 00:23:37.887 "is_configured": true, 00:23:37.887 "data_offset": 2048, 00:23:37.887 "data_size": 63488 00:23:37.887 } 00:23:37.887 ] 00:23:37.887 }' 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:37.887 13:43:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.455 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:38.455 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:38.455 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:23:38.455 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:38.455 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:38.455 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:38.455 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:38.455 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:38.455 13:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.455 13:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.455 [2024-11-20 13:43:41.220973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:38.455 13:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.455 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:38.455 "name": "Existed_Raid", 00:23:38.455 "aliases": [ 00:23:38.455 "9bb8966b-379f-4c48-b0cb-443e9820130b" 00:23:38.455 ], 00:23:38.455 "product_name": "Raid Volume", 00:23:38.455 "block_size": 512, 00:23:38.455 "num_blocks": 63488, 00:23:38.455 "uuid": "9bb8966b-379f-4c48-b0cb-443e9820130b", 00:23:38.455 "assigned_rate_limits": { 00:23:38.455 "rw_ios_per_sec": 0, 00:23:38.455 "rw_mbytes_per_sec": 0, 00:23:38.455 "r_mbytes_per_sec": 0, 00:23:38.455 "w_mbytes_per_sec": 0 00:23:38.455 }, 00:23:38.455 "claimed": false, 00:23:38.455 "zoned": false, 00:23:38.455 "supported_io_types": { 00:23:38.455 "read": true, 00:23:38.455 "write": true, 00:23:38.455 "unmap": false, 00:23:38.455 "flush": false, 00:23:38.455 "reset": true, 00:23:38.455 "nvme_admin": false, 00:23:38.455 "nvme_io": false, 00:23:38.455 "nvme_io_md": false, 00:23:38.455 "write_zeroes": true, 
00:23:38.455 "zcopy": false, 00:23:38.455 "get_zone_info": false, 00:23:38.455 "zone_management": false, 00:23:38.455 "zone_append": false, 00:23:38.455 "compare": false, 00:23:38.455 "compare_and_write": false, 00:23:38.455 "abort": false, 00:23:38.455 "seek_hole": false, 00:23:38.455 "seek_data": false, 00:23:38.455 "copy": false, 00:23:38.455 "nvme_iov_md": false 00:23:38.455 }, 00:23:38.455 "memory_domains": [ 00:23:38.455 { 00:23:38.455 "dma_device_id": "system", 00:23:38.455 "dma_device_type": 1 00:23:38.455 }, 00:23:38.455 { 00:23:38.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.455 "dma_device_type": 2 00:23:38.455 }, 00:23:38.455 { 00:23:38.455 "dma_device_id": "system", 00:23:38.455 "dma_device_type": 1 00:23:38.455 }, 00:23:38.455 { 00:23:38.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.455 "dma_device_type": 2 00:23:38.455 }, 00:23:38.455 { 00:23:38.455 "dma_device_id": "system", 00:23:38.455 "dma_device_type": 1 00:23:38.455 }, 00:23:38.455 { 00:23:38.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.455 "dma_device_type": 2 00:23:38.455 } 00:23:38.455 ], 00:23:38.455 "driver_specific": { 00:23:38.455 "raid": { 00:23:38.455 "uuid": "9bb8966b-379f-4c48-b0cb-443e9820130b", 00:23:38.455 "strip_size_kb": 0, 00:23:38.455 "state": "online", 00:23:38.455 "raid_level": "raid1", 00:23:38.455 "superblock": true, 00:23:38.455 "num_base_bdevs": 3, 00:23:38.455 "num_base_bdevs_discovered": 3, 00:23:38.455 "num_base_bdevs_operational": 3, 00:23:38.455 "base_bdevs_list": [ 00:23:38.455 { 00:23:38.455 "name": "BaseBdev1", 00:23:38.455 "uuid": "5581aa93-459e-4010-81ae-c54cd7ea7960", 00:23:38.455 "is_configured": true, 00:23:38.455 "data_offset": 2048, 00:23:38.455 "data_size": 63488 00:23:38.455 }, 00:23:38.455 { 00:23:38.455 "name": "BaseBdev2", 00:23:38.455 "uuid": "4a510ab0-a08c-408d-8e14-10f4f1dd00a5", 00:23:38.455 "is_configured": true, 00:23:38.455 "data_offset": 2048, 00:23:38.455 "data_size": 63488 00:23:38.455 }, 00:23:38.455 { 
00:23:38.455 "name": "BaseBdev3", 00:23:38.455 "uuid": "6a1e692b-083c-4ac9-b621-20178bd490b4", 00:23:38.455 "is_configured": true, 00:23:38.455 "data_offset": 2048, 00:23:38.455 "data_size": 63488 00:23:38.455 } 00:23:38.455 ] 00:23:38.455 } 00:23:38.455 } 00:23:38.455 }' 00:23:38.455 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:38.455 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:38.455 BaseBdev2 00:23:38.455 BaseBdev3' 00:23:38.455 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:38.715 13:43:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.715 [2024-11-20 13:43:41.524696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:38.715 
13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.715 13:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.974 13:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.974 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:38.974 "name": "Existed_Raid", 00:23:38.974 "uuid": "9bb8966b-379f-4c48-b0cb-443e9820130b", 00:23:38.974 "strip_size_kb": 0, 00:23:38.974 "state": "online", 00:23:38.974 "raid_level": "raid1", 00:23:38.974 "superblock": true, 00:23:38.974 "num_base_bdevs": 3, 00:23:38.974 "num_base_bdevs_discovered": 2, 00:23:38.974 "num_base_bdevs_operational": 2, 00:23:38.974 "base_bdevs_list": [ 00:23:38.974 { 00:23:38.974 "name": null, 00:23:38.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.974 "is_configured": false, 00:23:38.974 "data_offset": 0, 00:23:38.974 "data_size": 63488 00:23:38.974 }, 00:23:38.974 { 00:23:38.974 "name": "BaseBdev2", 00:23:38.974 "uuid": "4a510ab0-a08c-408d-8e14-10f4f1dd00a5", 00:23:38.974 "is_configured": true, 00:23:38.974 "data_offset": 2048, 00:23:38.974 "data_size": 63488 00:23:38.974 }, 00:23:38.974 { 00:23:38.974 "name": "BaseBdev3", 00:23:38.974 "uuid": "6a1e692b-083c-4ac9-b621-20178bd490b4", 00:23:38.974 "is_configured": true, 00:23:38.974 "data_offset": 2048, 00:23:38.974 "data_size": 63488 00:23:38.974 } 00:23:38.974 ] 00:23:38.974 }' 00:23:38.974 13:43:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:38.974 
13:43:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.233 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:39.233 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:39.233 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.233 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.233 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.233 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:39.492 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.492 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:39.492 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:39.492 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:39.492 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.492 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.492 [2024-11-20 13:43:42.187994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:39.492 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.492 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:39.492 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:39.492 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:23:39.492 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.492 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.492 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.492 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.492 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:39.492 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:39.492 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:23:39.492 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.492 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.492 [2024-11-20 13:43:42.327175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:39.492 [2024-11-20 13:43:42.327313] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:39.751 [2024-11-20 13:43:42.414213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:39.751 [2024-11-20 13:43:42.414287] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:39.751 [2024-11-20 13:43:42.414307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.751 BaseBdev2 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:39.751 13:43:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.751 [ 00:23:39.751 { 00:23:39.751 "name": "BaseBdev2", 00:23:39.751 "aliases": [ 00:23:39.751 "7ad0556d-f85e-487b-afc7-8c95700a6634" 00:23:39.751 ], 00:23:39.751 "product_name": "Malloc disk", 00:23:39.751 "block_size": 512, 00:23:39.751 "num_blocks": 65536, 00:23:39.751 "uuid": "7ad0556d-f85e-487b-afc7-8c95700a6634", 00:23:39.751 "assigned_rate_limits": { 00:23:39.751 "rw_ios_per_sec": 0, 00:23:39.751 "rw_mbytes_per_sec": 0, 00:23:39.751 "r_mbytes_per_sec": 0, 00:23:39.751 "w_mbytes_per_sec": 0 00:23:39.751 }, 00:23:39.751 "claimed": false, 00:23:39.751 "zoned": false, 00:23:39.751 "supported_io_types": { 00:23:39.751 "read": true, 00:23:39.751 "write": true, 00:23:39.751 "unmap": true, 00:23:39.751 "flush": true, 00:23:39.751 "reset": true, 00:23:39.751 "nvme_admin": false, 00:23:39.751 "nvme_io": false, 00:23:39.751 "nvme_io_md": false, 00:23:39.751 
"write_zeroes": true, 00:23:39.751 "zcopy": true, 00:23:39.751 "get_zone_info": false, 00:23:39.751 "zone_management": false, 00:23:39.751 "zone_append": false, 00:23:39.751 "compare": false, 00:23:39.751 "compare_and_write": false, 00:23:39.751 "abort": true, 00:23:39.751 "seek_hole": false, 00:23:39.751 "seek_data": false, 00:23:39.751 "copy": true, 00:23:39.751 "nvme_iov_md": false 00:23:39.751 }, 00:23:39.751 "memory_domains": [ 00:23:39.751 { 00:23:39.751 "dma_device_id": "system", 00:23:39.751 "dma_device_type": 1 00:23:39.751 }, 00:23:39.751 { 00:23:39.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.751 "dma_device_type": 2 00:23:39.751 } 00:23:39.751 ], 00:23:39.751 "driver_specific": {} 00:23:39.751 } 00:23:39.751 ] 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.751 BaseBdev3 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
local bdev_timeout= 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.751 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.751 [ 00:23:39.751 { 00:23:39.751 "name": "BaseBdev3", 00:23:39.751 "aliases": [ 00:23:39.751 "0bfc39c4-993b-4d1e-af5e-4da46d8b6aeb" 00:23:39.751 ], 00:23:39.751 "product_name": "Malloc disk", 00:23:39.751 "block_size": 512, 00:23:39.751 "num_blocks": 65536, 00:23:39.751 "uuid": "0bfc39c4-993b-4d1e-af5e-4da46d8b6aeb", 00:23:39.751 "assigned_rate_limits": { 00:23:39.751 "rw_ios_per_sec": 0, 00:23:39.751 "rw_mbytes_per_sec": 0, 00:23:39.751 "r_mbytes_per_sec": 0, 00:23:39.751 "w_mbytes_per_sec": 0 00:23:39.751 }, 00:23:39.751 "claimed": false, 00:23:39.751 "zoned": false, 00:23:39.751 "supported_io_types": { 00:23:39.751 "read": true, 00:23:39.751 "write": true, 00:23:39.751 "unmap": true, 00:23:39.752 "flush": true, 00:23:39.752 "reset": true, 00:23:39.752 "nvme_admin": false, 00:23:39.752 "nvme_io": false, 
00:23:39.752 "nvme_io_md": false, 00:23:39.752 "write_zeroes": true, 00:23:39.752 "zcopy": true, 00:23:39.752 "get_zone_info": false, 00:23:39.752 "zone_management": false, 00:23:39.752 "zone_append": false, 00:23:39.752 "compare": false, 00:23:39.752 "compare_and_write": false, 00:23:39.752 "abort": true, 00:23:39.752 "seek_hole": false, 00:23:39.752 "seek_data": false, 00:23:39.752 "copy": true, 00:23:39.752 "nvme_iov_md": false 00:23:39.752 }, 00:23:39.752 "memory_domains": [ 00:23:39.752 { 00:23:39.752 "dma_device_id": "system", 00:23:39.752 "dma_device_type": 1 00:23:39.752 }, 00:23:39.752 { 00:23:39.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:39.752 "dma_device_type": 2 00:23:39.752 } 00:23:39.752 ], 00:23:39.752 "driver_specific": {} 00:23:39.752 } 00:23:39.752 ] 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.752 [2024-11-20 13:43:42.618475] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:39.752 [2024-11-20 13:43:42.618538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:39.752 [2024-11-20 13:43:42.618588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:23:39.752 [2024-11-20 13:43:42.621478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.752 13:43:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.009 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:40.009 "name": "Existed_Raid", 00:23:40.009 "uuid": "552375c1-d310-4f9f-b150-e6d525fc41f5", 00:23:40.009 "strip_size_kb": 0, 00:23:40.009 "state": "configuring", 00:23:40.009 "raid_level": "raid1", 00:23:40.009 "superblock": true, 00:23:40.009 "num_base_bdevs": 3, 00:23:40.009 "num_base_bdevs_discovered": 2, 00:23:40.009 "num_base_bdevs_operational": 3, 00:23:40.009 "base_bdevs_list": [ 00:23:40.009 { 00:23:40.009 "name": "BaseBdev1", 00:23:40.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.009 "is_configured": false, 00:23:40.009 "data_offset": 0, 00:23:40.009 "data_size": 0 00:23:40.009 }, 00:23:40.009 { 00:23:40.009 "name": "BaseBdev2", 00:23:40.009 "uuid": "7ad0556d-f85e-487b-afc7-8c95700a6634", 00:23:40.009 "is_configured": true, 00:23:40.009 "data_offset": 2048, 00:23:40.009 "data_size": 63488 00:23:40.009 }, 00:23:40.009 { 00:23:40.009 "name": "BaseBdev3", 00:23:40.009 "uuid": "0bfc39c4-993b-4d1e-af5e-4da46d8b6aeb", 00:23:40.009 "is_configured": true, 00:23:40.009 "data_offset": 2048, 00:23:40.009 "data_size": 63488 00:23:40.009 } 00:23:40.009 ] 00:23:40.009 }' 00:23:40.009 13:43:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:40.009 13:43:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:40.267 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:23:40.267 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.267 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:40.267 [2024-11-20 13:43:43.126689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:40.267 13:43:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.267 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:40.267 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:40.267 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:40.267 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:40.267 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:40.267 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:40.267 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:40.267 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:40.267 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:40.267 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:40.267 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.267 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.267 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:40.267 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:40.267 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.524 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:40.524 "name": "Existed_Raid", 00:23:40.524 "uuid": 
"552375c1-d310-4f9f-b150-e6d525fc41f5", 00:23:40.524 "strip_size_kb": 0, 00:23:40.524 "state": "configuring", 00:23:40.524 "raid_level": "raid1", 00:23:40.524 "superblock": true, 00:23:40.524 "num_base_bdevs": 3, 00:23:40.524 "num_base_bdevs_discovered": 1, 00:23:40.524 "num_base_bdevs_operational": 3, 00:23:40.524 "base_bdevs_list": [ 00:23:40.524 { 00:23:40.524 "name": "BaseBdev1", 00:23:40.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.524 "is_configured": false, 00:23:40.524 "data_offset": 0, 00:23:40.524 "data_size": 0 00:23:40.524 }, 00:23:40.524 { 00:23:40.524 "name": null, 00:23:40.524 "uuid": "7ad0556d-f85e-487b-afc7-8c95700a6634", 00:23:40.524 "is_configured": false, 00:23:40.524 "data_offset": 0, 00:23:40.524 "data_size": 63488 00:23:40.524 }, 00:23:40.524 { 00:23:40.524 "name": "BaseBdev3", 00:23:40.524 "uuid": "0bfc39c4-993b-4d1e-af5e-4da46d8b6aeb", 00:23:40.524 "is_configured": true, 00:23:40.524 "data_offset": 2048, 00:23:40.524 "data_size": 63488 00:23:40.524 } 00:23:40.524 ] 00:23:40.524 }' 00:23:40.524 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:40.524 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:40.782 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:40.782 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.782 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.782 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:40.782 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.782 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:23:40.782 13:43:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:23:40.782 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.782 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:40.782 [2024-11-20 13:43:43.686444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:40.782 BaseBdev1 00:23:40.782 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.782 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:23:40.782 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:23:40.782 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:40.782 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:40.782 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:40.782 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:40.782 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:40.782 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.782 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.040 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.040 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:41.040 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:41.040 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.040 [ 00:23:41.040 { 00:23:41.040 "name": "BaseBdev1", 00:23:41.040 "aliases": [ 00:23:41.040 "7f8f1d5f-d0c5-4879-a00b-b43b4aa3e713" 00:23:41.040 ], 00:23:41.040 "product_name": "Malloc disk", 00:23:41.040 "block_size": 512, 00:23:41.040 "num_blocks": 65536, 00:23:41.040 "uuid": "7f8f1d5f-d0c5-4879-a00b-b43b4aa3e713", 00:23:41.040 "assigned_rate_limits": { 00:23:41.040 "rw_ios_per_sec": 0, 00:23:41.040 "rw_mbytes_per_sec": 0, 00:23:41.040 "r_mbytes_per_sec": 0, 00:23:41.040 "w_mbytes_per_sec": 0 00:23:41.040 }, 00:23:41.040 "claimed": true, 00:23:41.040 "claim_type": "exclusive_write", 00:23:41.040 "zoned": false, 00:23:41.040 "supported_io_types": { 00:23:41.040 "read": true, 00:23:41.040 "write": true, 00:23:41.040 "unmap": true, 00:23:41.040 "flush": true, 00:23:41.040 "reset": true, 00:23:41.040 "nvme_admin": false, 00:23:41.040 "nvme_io": false, 00:23:41.040 "nvme_io_md": false, 00:23:41.040 "write_zeroes": true, 00:23:41.040 "zcopy": true, 00:23:41.040 "get_zone_info": false, 00:23:41.040 "zone_management": false, 00:23:41.040 "zone_append": false, 00:23:41.040 "compare": false, 00:23:41.040 "compare_and_write": false, 00:23:41.040 "abort": true, 00:23:41.040 "seek_hole": false, 00:23:41.040 "seek_data": false, 00:23:41.040 "copy": true, 00:23:41.040 "nvme_iov_md": false 00:23:41.040 }, 00:23:41.040 "memory_domains": [ 00:23:41.040 { 00:23:41.040 "dma_device_id": "system", 00:23:41.040 "dma_device_type": 1 00:23:41.040 }, 00:23:41.040 { 00:23:41.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:41.040 "dma_device_type": 2 00:23:41.040 } 00:23:41.040 ], 00:23:41.040 "driver_specific": {} 00:23:41.040 } 00:23:41.040 ] 00:23:41.040 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.040 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:41.040 
13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:41.041 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:41.041 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:41.041 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:41.041 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:41.041 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:41.041 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:41.041 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:41.041 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:41.041 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:41.041 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.041 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.041 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:41.041 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.041 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.041 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:41.041 "name": "Existed_Raid", 00:23:41.041 "uuid": "552375c1-d310-4f9f-b150-e6d525fc41f5", 00:23:41.041 "strip_size_kb": 0, 
00:23:41.041 "state": "configuring", 00:23:41.041 "raid_level": "raid1", 00:23:41.041 "superblock": true, 00:23:41.041 "num_base_bdevs": 3, 00:23:41.041 "num_base_bdevs_discovered": 2, 00:23:41.041 "num_base_bdevs_operational": 3, 00:23:41.041 "base_bdevs_list": [ 00:23:41.041 { 00:23:41.041 "name": "BaseBdev1", 00:23:41.041 "uuid": "7f8f1d5f-d0c5-4879-a00b-b43b4aa3e713", 00:23:41.041 "is_configured": true, 00:23:41.041 "data_offset": 2048, 00:23:41.041 "data_size": 63488 00:23:41.041 }, 00:23:41.041 { 00:23:41.041 "name": null, 00:23:41.041 "uuid": "7ad0556d-f85e-487b-afc7-8c95700a6634", 00:23:41.041 "is_configured": false, 00:23:41.041 "data_offset": 0, 00:23:41.041 "data_size": 63488 00:23:41.041 }, 00:23:41.041 { 00:23:41.041 "name": "BaseBdev3", 00:23:41.041 "uuid": "0bfc39c4-993b-4d1e-af5e-4da46d8b6aeb", 00:23:41.041 "is_configured": true, 00:23:41.041 "data_offset": 2048, 00:23:41.041 "data_size": 63488 00:23:41.041 } 00:23:41.041 ] 00:23:41.041 }' 00:23:41.041 13:43:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:41.041 13:43:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.607 [2024-11-20 13:43:44.302692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.607 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:41.607 "name": "Existed_Raid", 00:23:41.607 "uuid": "552375c1-d310-4f9f-b150-e6d525fc41f5", 00:23:41.607 "strip_size_kb": 0, 00:23:41.607 "state": "configuring", 00:23:41.607 "raid_level": "raid1", 00:23:41.607 "superblock": true, 00:23:41.607 "num_base_bdevs": 3, 00:23:41.608 "num_base_bdevs_discovered": 1, 00:23:41.608 "num_base_bdevs_operational": 3, 00:23:41.608 "base_bdevs_list": [ 00:23:41.608 { 00:23:41.608 "name": "BaseBdev1", 00:23:41.608 "uuid": "7f8f1d5f-d0c5-4879-a00b-b43b4aa3e713", 00:23:41.608 "is_configured": true, 00:23:41.608 "data_offset": 2048, 00:23:41.608 "data_size": 63488 00:23:41.608 }, 00:23:41.608 { 00:23:41.608 "name": null, 00:23:41.608 "uuid": "7ad0556d-f85e-487b-afc7-8c95700a6634", 00:23:41.608 "is_configured": false, 00:23:41.608 "data_offset": 0, 00:23:41.608 "data_size": 63488 00:23:41.608 }, 00:23:41.608 { 00:23:41.608 "name": null, 00:23:41.608 "uuid": "0bfc39c4-993b-4d1e-af5e-4da46d8b6aeb", 00:23:41.608 "is_configured": false, 00:23:41.608 "data_offset": 0, 00:23:41.608 "data_size": 63488 00:23:41.608 } 00:23:41.608 ] 00:23:41.608 }' 00:23:41.608 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:41.608 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.175 [2024-11-20 13:43:44.882976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:42.175 "name": "Existed_Raid", 00:23:42.175 "uuid": "552375c1-d310-4f9f-b150-e6d525fc41f5", 00:23:42.175 "strip_size_kb": 0, 00:23:42.175 "state": "configuring", 00:23:42.175 "raid_level": "raid1", 00:23:42.175 "superblock": true, 00:23:42.175 "num_base_bdevs": 3, 00:23:42.175 "num_base_bdevs_discovered": 2, 00:23:42.175 "num_base_bdevs_operational": 3, 00:23:42.175 "base_bdevs_list": [ 00:23:42.175 { 00:23:42.175 "name": "BaseBdev1", 00:23:42.175 "uuid": "7f8f1d5f-d0c5-4879-a00b-b43b4aa3e713", 00:23:42.175 "is_configured": true, 00:23:42.175 "data_offset": 2048, 00:23:42.175 "data_size": 63488 00:23:42.175 }, 00:23:42.175 { 00:23:42.175 "name": null, 00:23:42.175 "uuid": "7ad0556d-f85e-487b-afc7-8c95700a6634", 00:23:42.175 "is_configured": false, 00:23:42.175 "data_offset": 0, 00:23:42.175 "data_size": 63488 00:23:42.175 }, 00:23:42.175 { 00:23:42.175 "name": "BaseBdev3", 00:23:42.175 "uuid": "0bfc39c4-993b-4d1e-af5e-4da46d8b6aeb", 00:23:42.175 "is_configured": true, 00:23:42.175 "data_offset": 2048, 00:23:42.175 "data_size": 63488 00:23:42.175 } 00:23:42.175 ] 00:23:42.175 }' 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:42.175 13:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.742 [2024-11-20 13:43:45.435082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.742 13:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:42.743 "name": "Existed_Raid", 00:23:42.743 "uuid": "552375c1-d310-4f9f-b150-e6d525fc41f5", 00:23:42.743 "strip_size_kb": 0, 00:23:42.743 "state": "configuring", 00:23:42.743 "raid_level": "raid1", 00:23:42.743 "superblock": true, 00:23:42.743 "num_base_bdevs": 3, 00:23:42.743 "num_base_bdevs_discovered": 1, 00:23:42.743 "num_base_bdevs_operational": 3, 00:23:42.743 "base_bdevs_list": [ 00:23:42.743 { 00:23:42.743 "name": null, 00:23:42.743 "uuid": "7f8f1d5f-d0c5-4879-a00b-b43b4aa3e713", 00:23:42.743 "is_configured": false, 00:23:42.743 "data_offset": 0, 00:23:42.743 "data_size": 63488 00:23:42.743 }, 00:23:42.743 { 00:23:42.743 "name": null, 00:23:42.743 "uuid": 
"7ad0556d-f85e-487b-afc7-8c95700a6634", 00:23:42.743 "is_configured": false, 00:23:42.743 "data_offset": 0, 00:23:42.743 "data_size": 63488 00:23:42.743 }, 00:23:42.743 { 00:23:42.743 "name": "BaseBdev3", 00:23:42.743 "uuid": "0bfc39c4-993b-4d1e-af5e-4da46d8b6aeb", 00:23:42.743 "is_configured": true, 00:23:42.743 "data_offset": 2048, 00:23:42.743 "data_size": 63488 00:23:42.743 } 00:23:42.743 ] 00:23:42.743 }' 00:23:42.743 13:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:42.743 13:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.309 [2024-11-20 13:43:46.127684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:43.309 "name": "Existed_Raid", 00:23:43.309 "uuid": "552375c1-d310-4f9f-b150-e6d525fc41f5", 00:23:43.309 "strip_size_kb": 0, 00:23:43.309 "state": "configuring", 00:23:43.309 
"raid_level": "raid1", 00:23:43.309 "superblock": true, 00:23:43.309 "num_base_bdevs": 3, 00:23:43.309 "num_base_bdevs_discovered": 2, 00:23:43.309 "num_base_bdevs_operational": 3, 00:23:43.309 "base_bdevs_list": [ 00:23:43.309 { 00:23:43.309 "name": null, 00:23:43.309 "uuid": "7f8f1d5f-d0c5-4879-a00b-b43b4aa3e713", 00:23:43.309 "is_configured": false, 00:23:43.309 "data_offset": 0, 00:23:43.309 "data_size": 63488 00:23:43.309 }, 00:23:43.309 { 00:23:43.309 "name": "BaseBdev2", 00:23:43.309 "uuid": "7ad0556d-f85e-487b-afc7-8c95700a6634", 00:23:43.309 "is_configured": true, 00:23:43.309 "data_offset": 2048, 00:23:43.309 "data_size": 63488 00:23:43.309 }, 00:23:43.309 { 00:23:43.309 "name": "BaseBdev3", 00:23:43.309 "uuid": "0bfc39c4-993b-4d1e-af5e-4da46d8b6aeb", 00:23:43.309 "is_configured": true, 00:23:43.309 "data_offset": 2048, 00:23:43.309 "data_size": 63488 00:23:43.309 } 00:23:43.309 ] 00:23:43.309 }' 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:43.309 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.877 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:43.877 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.877 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.877 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.877 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.877 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:23:43.877 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:43.877 13:43:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.877 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.877 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:43.877 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.877 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7f8f1d5f-d0c5-4879-a00b-b43b4aa3e713 00:23:43.877 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.877 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.136 [2024-11-20 13:43:46.818380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:44.136 [2024-11-20 13:43:46.818684] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:44.136 [2024-11-20 13:43:46.818708] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:44.136 NewBaseBdev 00:23:44.136 [2024-11-20 13:43:46.819056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:44.136 [2024-11-20 13:43:46.819248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:44.136 [2024-11-20 13:43:46.819270] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:23:44.136 [2024-11-20 13:43:46.819450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:23:44.136 
13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.136 [ 00:23:44.136 { 00:23:44.136 "name": "NewBaseBdev", 00:23:44.136 "aliases": [ 00:23:44.136 "7f8f1d5f-d0c5-4879-a00b-b43b4aa3e713" 00:23:44.136 ], 00:23:44.136 "product_name": "Malloc disk", 00:23:44.136 "block_size": 512, 00:23:44.136 "num_blocks": 65536, 00:23:44.136 "uuid": "7f8f1d5f-d0c5-4879-a00b-b43b4aa3e713", 00:23:44.136 "assigned_rate_limits": { 00:23:44.136 "rw_ios_per_sec": 0, 00:23:44.136 "rw_mbytes_per_sec": 0, 00:23:44.136 "r_mbytes_per_sec": 0, 00:23:44.136 "w_mbytes_per_sec": 0 00:23:44.136 }, 00:23:44.136 "claimed": true, 00:23:44.136 "claim_type": "exclusive_write", 00:23:44.136 
"zoned": false, 00:23:44.136 "supported_io_types": { 00:23:44.136 "read": true, 00:23:44.136 "write": true, 00:23:44.136 "unmap": true, 00:23:44.136 "flush": true, 00:23:44.136 "reset": true, 00:23:44.136 "nvme_admin": false, 00:23:44.136 "nvme_io": false, 00:23:44.136 "nvme_io_md": false, 00:23:44.136 "write_zeroes": true, 00:23:44.136 "zcopy": true, 00:23:44.136 "get_zone_info": false, 00:23:44.136 "zone_management": false, 00:23:44.136 "zone_append": false, 00:23:44.136 "compare": false, 00:23:44.136 "compare_and_write": false, 00:23:44.136 "abort": true, 00:23:44.136 "seek_hole": false, 00:23:44.136 "seek_data": false, 00:23:44.136 "copy": true, 00:23:44.136 "nvme_iov_md": false 00:23:44.136 }, 00:23:44.136 "memory_domains": [ 00:23:44.136 { 00:23:44.136 "dma_device_id": "system", 00:23:44.136 "dma_device_type": 1 00:23:44.136 }, 00:23:44.136 { 00:23:44.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.136 "dma_device_type": 2 00:23:44.136 } 00:23:44.136 ], 00:23:44.136 "driver_specific": {} 00:23:44.136 } 00:23:44.136 ] 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:44.136 "name": "Existed_Raid", 00:23:44.136 "uuid": "552375c1-d310-4f9f-b150-e6d525fc41f5", 00:23:44.136 "strip_size_kb": 0, 00:23:44.136 "state": "online", 00:23:44.136 "raid_level": "raid1", 00:23:44.136 "superblock": true, 00:23:44.136 "num_base_bdevs": 3, 00:23:44.136 "num_base_bdevs_discovered": 3, 00:23:44.136 "num_base_bdevs_operational": 3, 00:23:44.136 "base_bdevs_list": [ 00:23:44.136 { 00:23:44.136 "name": "NewBaseBdev", 00:23:44.136 "uuid": "7f8f1d5f-d0c5-4879-a00b-b43b4aa3e713", 00:23:44.136 "is_configured": true, 00:23:44.136 "data_offset": 2048, 00:23:44.136 "data_size": 63488 00:23:44.136 }, 00:23:44.136 { 00:23:44.136 "name": "BaseBdev2", 00:23:44.136 "uuid": "7ad0556d-f85e-487b-afc7-8c95700a6634", 00:23:44.136 "is_configured": true, 00:23:44.136 "data_offset": 2048, 00:23:44.136 "data_size": 63488 00:23:44.136 }, 00:23:44.136 
{ 00:23:44.136 "name": "BaseBdev3", 00:23:44.136 "uuid": "0bfc39c4-993b-4d1e-af5e-4da46d8b6aeb", 00:23:44.136 "is_configured": true, 00:23:44.136 "data_offset": 2048, 00:23:44.136 "data_size": 63488 00:23:44.136 } 00:23:44.136 ] 00:23:44.136 }' 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:44.136 13:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.704 [2024-11-20 13:43:47.395033] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:44.704 "name": "Existed_Raid", 00:23:44.704 
"aliases": [ 00:23:44.704 "552375c1-d310-4f9f-b150-e6d525fc41f5" 00:23:44.704 ], 00:23:44.704 "product_name": "Raid Volume", 00:23:44.704 "block_size": 512, 00:23:44.704 "num_blocks": 63488, 00:23:44.704 "uuid": "552375c1-d310-4f9f-b150-e6d525fc41f5", 00:23:44.704 "assigned_rate_limits": { 00:23:44.704 "rw_ios_per_sec": 0, 00:23:44.704 "rw_mbytes_per_sec": 0, 00:23:44.704 "r_mbytes_per_sec": 0, 00:23:44.704 "w_mbytes_per_sec": 0 00:23:44.704 }, 00:23:44.704 "claimed": false, 00:23:44.704 "zoned": false, 00:23:44.704 "supported_io_types": { 00:23:44.704 "read": true, 00:23:44.704 "write": true, 00:23:44.704 "unmap": false, 00:23:44.704 "flush": false, 00:23:44.704 "reset": true, 00:23:44.704 "nvme_admin": false, 00:23:44.704 "nvme_io": false, 00:23:44.704 "nvme_io_md": false, 00:23:44.704 "write_zeroes": true, 00:23:44.704 "zcopy": false, 00:23:44.704 "get_zone_info": false, 00:23:44.704 "zone_management": false, 00:23:44.704 "zone_append": false, 00:23:44.704 "compare": false, 00:23:44.704 "compare_and_write": false, 00:23:44.704 "abort": false, 00:23:44.704 "seek_hole": false, 00:23:44.704 "seek_data": false, 00:23:44.704 "copy": false, 00:23:44.704 "nvme_iov_md": false 00:23:44.704 }, 00:23:44.704 "memory_domains": [ 00:23:44.704 { 00:23:44.704 "dma_device_id": "system", 00:23:44.704 "dma_device_type": 1 00:23:44.704 }, 00:23:44.704 { 00:23:44.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.704 "dma_device_type": 2 00:23:44.704 }, 00:23:44.704 { 00:23:44.704 "dma_device_id": "system", 00:23:44.704 "dma_device_type": 1 00:23:44.704 }, 00:23:44.704 { 00:23:44.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.704 "dma_device_type": 2 00:23:44.704 }, 00:23:44.704 { 00:23:44.704 "dma_device_id": "system", 00:23:44.704 "dma_device_type": 1 00:23:44.704 }, 00:23:44.704 { 00:23:44.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.704 "dma_device_type": 2 00:23:44.704 } 00:23:44.704 ], 00:23:44.704 "driver_specific": { 00:23:44.704 "raid": { 00:23:44.704 
"uuid": "552375c1-d310-4f9f-b150-e6d525fc41f5", 00:23:44.704 "strip_size_kb": 0, 00:23:44.704 "state": "online", 00:23:44.704 "raid_level": "raid1", 00:23:44.704 "superblock": true, 00:23:44.704 "num_base_bdevs": 3, 00:23:44.704 "num_base_bdevs_discovered": 3, 00:23:44.704 "num_base_bdevs_operational": 3, 00:23:44.704 "base_bdevs_list": [ 00:23:44.704 { 00:23:44.704 "name": "NewBaseBdev", 00:23:44.704 "uuid": "7f8f1d5f-d0c5-4879-a00b-b43b4aa3e713", 00:23:44.704 "is_configured": true, 00:23:44.704 "data_offset": 2048, 00:23:44.704 "data_size": 63488 00:23:44.704 }, 00:23:44.704 { 00:23:44.704 "name": "BaseBdev2", 00:23:44.704 "uuid": "7ad0556d-f85e-487b-afc7-8c95700a6634", 00:23:44.704 "is_configured": true, 00:23:44.704 "data_offset": 2048, 00:23:44.704 "data_size": 63488 00:23:44.704 }, 00:23:44.704 { 00:23:44.704 "name": "BaseBdev3", 00:23:44.704 "uuid": "0bfc39c4-993b-4d1e-af5e-4da46d8b6aeb", 00:23:44.704 "is_configured": true, 00:23:44.704 "data_offset": 2048, 00:23:44.704 "data_size": 63488 00:23:44.704 } 00:23:44.704 ] 00:23:44.704 } 00:23:44.704 } 00:23:44.704 }' 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:23:44.704 BaseBdev2 00:23:44.704 BaseBdev3' 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:44.704 
13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.704 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.705 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:44.705 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:44.705 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:44.705 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:44.705 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.705 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.705 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:44.964 [2024-11-20 13:43:47.706677] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:44.964 [2024-11-20 13:43:47.706720] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:44.964 [2024-11-20 13:43:47.706828] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:44.964 [2024-11-20 13:43:47.707260] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:44.964 [2024-11-20 13:43:47.707280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68263 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68263 ']' 00:23:44.964 13:43:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68263 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68263 00:23:44.964 killing process with pid 68263 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68263' 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68263 00:23:44.964 [2024-11-20 13:43:47.744954] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:44.964 13:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68263 00:23:45.222 [2024-11-20 13:43:48.023838] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:46.599 ************************************ 00:23:46.599 END TEST raid_state_function_test_sb 00:23:46.599 ************************************ 00:23:46.599 13:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:23:46.599 00:23:46.599 real 0m11.962s 00:23:46.599 user 0m19.762s 00:23:46.599 sys 0m1.641s 00:23:46.599 13:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:46.599 13:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.599 13:43:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:23:46.599 13:43:49 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:46.599 13:43:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.599 13:43:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:46.599 ************************************ 00:23:46.599 START TEST raid_superblock_test 00:23:46.599 ************************************ 00:23:46.599 13:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:23:46.599 13:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:23:46.599 13:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:23:46.599 13:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:46.599 13:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:46.599 13:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:46.599 13:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:46.599 13:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:46.600 13:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:46.600 13:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:23:46.600 13:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:46.600 13:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:46.600 13:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:46.600 13:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:46.600 13:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:23:46.600 13:43:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:23:46.600 13:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68900 00:23:46.600 13:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:46.600 13:43:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68900 00:23:46.600 13:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68900 ']' 00:23:46.600 13:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.600 13:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.600 13:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.600 13:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.600 13:43:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:46.600 [2024-11-20 13:43:49.262488] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:23:46.600 [2024-11-20 13:43:49.262956] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68900 ] 00:23:46.600 [2024-11-20 13:43:49.435778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.858 [2024-11-20 13:43:49.570964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.116 [2024-11-20 13:43:49.776797] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:47.116 [2024-11-20 13:43:49.776861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:47.374 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:47.374 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:23:47.375 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:47.375 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:47.375 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:47.375 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:47.375 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:47.375 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:47.375 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:47.375 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:47.375 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:23:47.375 
13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.375 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.635 malloc1 00:23:47.635 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.635 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:47.635 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.635 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.635 [2024-11-20 13:43:50.327075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:47.635 [2024-11-20 13:43:50.327312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:47.635 [2024-11-20 13:43:50.327358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:47.635 [2024-11-20 13:43:50.327376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:47.635 [2024-11-20 13:43:50.330248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:47.635 [2024-11-20 13:43:50.330295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:47.635 pt1 00:23:47.635 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.635 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:47.635 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:47.635 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:47.635 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:23:47.635 13:43:50 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:47.635 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:47.635 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:47.635 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:47.635 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.636 malloc2 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.636 [2024-11-20 13:43:50.379433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:47.636 [2024-11-20 13:43:50.379508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:47.636 [2024-11-20 13:43:50.379549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:47.636 [2024-11-20 13:43:50.379564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:47.636 [2024-11-20 13:43:50.382554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:47.636 [2024-11-20 13:43:50.382602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:47.636 
pt2 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.636 malloc3 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.636 [2024-11-20 13:43:50.446137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:47.636 [2024-11-20 13:43:50.446213] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:47.636 [2024-11-20 13:43:50.446248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:47.636 [2024-11-20 13:43:50.446264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:47.636 [2024-11-20 13:43:50.449140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:47.636 [2024-11-20 13:43:50.449324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:47.636 pt3 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:47.636 [2024-11-20 13:43:50.454220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:47.636 [2024-11-20 13:43:50.456650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:47.636 [2024-11-20 13:43:50.456756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:47.636 [2024-11-20 13:43:50.457018] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:47.636 [2024-11-20 13:43:50.457049] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:47.636 [2024-11-20 13:43:50.457370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:47.636 
[2024-11-20 13:43:50.457609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:47.636 [2024-11-20 13:43:50.457631] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:47.636 [2024-11-20 13:43:50.457831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:47.636 "name": "raid_bdev1", 00:23:47.636 "uuid": "e2bc4ea9-6c85-40af-ab63-f9ff546beb3c", 00:23:47.636 "strip_size_kb": 0, 00:23:47.636 "state": "online", 00:23:47.636 "raid_level": "raid1", 00:23:47.636 "superblock": true, 00:23:47.636 "num_base_bdevs": 3, 00:23:47.636 "num_base_bdevs_discovered": 3, 00:23:47.636 "num_base_bdevs_operational": 3, 00:23:47.636 "base_bdevs_list": [ 00:23:47.636 { 00:23:47.636 "name": "pt1", 00:23:47.636 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:47.636 "is_configured": true, 00:23:47.636 "data_offset": 2048, 00:23:47.636 "data_size": 63488 00:23:47.636 }, 00:23:47.636 { 00:23:47.636 "name": "pt2", 00:23:47.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:47.636 "is_configured": true, 00:23:47.636 "data_offset": 2048, 00:23:47.636 "data_size": 63488 00:23:47.636 }, 00:23:47.636 { 00:23:47.636 "name": "pt3", 00:23:47.636 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:47.636 "is_configured": true, 00:23:47.636 "data_offset": 2048, 00:23:47.636 "data_size": 63488 00:23:47.636 } 00:23:47.636 ] 00:23:47.636 }' 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:47.636 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.203 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:48.203 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:48.203 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:48.203 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:48.203 13:43:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:23:48.203 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:48.203 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:48.203 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.203 13:43:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:48.203 13:43:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.203 [2024-11-20 13:43:50.986711] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:48.203 13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.203 13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:48.203 "name": "raid_bdev1", 00:23:48.203 "aliases": [ 00:23:48.203 "e2bc4ea9-6c85-40af-ab63-f9ff546beb3c" 00:23:48.203 ], 00:23:48.203 "product_name": "Raid Volume", 00:23:48.203 "block_size": 512, 00:23:48.203 "num_blocks": 63488, 00:23:48.203 "uuid": "e2bc4ea9-6c85-40af-ab63-f9ff546beb3c", 00:23:48.203 "assigned_rate_limits": { 00:23:48.203 "rw_ios_per_sec": 0, 00:23:48.203 "rw_mbytes_per_sec": 0, 00:23:48.203 "r_mbytes_per_sec": 0, 00:23:48.203 "w_mbytes_per_sec": 0 00:23:48.203 }, 00:23:48.203 "claimed": false, 00:23:48.203 "zoned": false, 00:23:48.204 "supported_io_types": { 00:23:48.204 "read": true, 00:23:48.204 "write": true, 00:23:48.204 "unmap": false, 00:23:48.204 "flush": false, 00:23:48.204 "reset": true, 00:23:48.204 "nvme_admin": false, 00:23:48.204 "nvme_io": false, 00:23:48.204 "nvme_io_md": false, 00:23:48.204 "write_zeroes": true, 00:23:48.204 "zcopy": false, 00:23:48.204 "get_zone_info": false, 00:23:48.204 "zone_management": false, 00:23:48.204 "zone_append": false, 00:23:48.204 "compare": false, 00:23:48.204 
"compare_and_write": false, 00:23:48.204 "abort": false, 00:23:48.204 "seek_hole": false, 00:23:48.204 "seek_data": false, 00:23:48.204 "copy": false, 00:23:48.204 "nvme_iov_md": false 00:23:48.204 }, 00:23:48.204 "memory_domains": [ 00:23:48.204 { 00:23:48.204 "dma_device_id": "system", 00:23:48.204 "dma_device_type": 1 00:23:48.204 }, 00:23:48.204 { 00:23:48.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:48.204 "dma_device_type": 2 00:23:48.204 }, 00:23:48.204 { 00:23:48.204 "dma_device_id": "system", 00:23:48.204 "dma_device_type": 1 00:23:48.204 }, 00:23:48.204 { 00:23:48.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:48.204 "dma_device_type": 2 00:23:48.204 }, 00:23:48.204 { 00:23:48.204 "dma_device_id": "system", 00:23:48.204 "dma_device_type": 1 00:23:48.204 }, 00:23:48.204 { 00:23:48.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:48.204 "dma_device_type": 2 00:23:48.204 } 00:23:48.204 ], 00:23:48.204 "driver_specific": { 00:23:48.204 "raid": { 00:23:48.204 "uuid": "e2bc4ea9-6c85-40af-ab63-f9ff546beb3c", 00:23:48.204 "strip_size_kb": 0, 00:23:48.204 "state": "online", 00:23:48.204 "raid_level": "raid1", 00:23:48.204 "superblock": true, 00:23:48.204 "num_base_bdevs": 3, 00:23:48.204 "num_base_bdevs_discovered": 3, 00:23:48.204 "num_base_bdevs_operational": 3, 00:23:48.204 "base_bdevs_list": [ 00:23:48.204 { 00:23:48.204 "name": "pt1", 00:23:48.204 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:48.204 "is_configured": true, 00:23:48.204 "data_offset": 2048, 00:23:48.204 "data_size": 63488 00:23:48.204 }, 00:23:48.204 { 00:23:48.204 "name": "pt2", 00:23:48.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:48.204 "is_configured": true, 00:23:48.204 "data_offset": 2048, 00:23:48.204 "data_size": 63488 00:23:48.204 }, 00:23:48.204 { 00:23:48.204 "name": "pt3", 00:23:48.204 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:48.204 "is_configured": true, 00:23:48.204 "data_offset": 2048, 00:23:48.204 "data_size": 63488 00:23:48.204 } 
00:23:48.204 ] 00:23:48.204 } 00:23:48.204 } 00:23:48.204 }' 00:23:48.204 13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:48.204 13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:48.204 pt2 00:23:48.204 pt3' 00:23:48.204 13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:48.465 13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:23:48.465 13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:48.465 13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:48.465 13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.465 13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.465 13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:48.465 13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.465 13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:23:48.465 13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:23:48.465 13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:48.465 13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:48.465 13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:48.465 13:43:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:48.465  [2024-11-20 13:43:51.298747] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e2bc4ea9-6c85-40af-ab63-f9ff546beb3c
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e2bc4ea9-6c85-40af-ab63-f9ff546beb3c ']'
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:48.465  [2024-11-20 13:43:51.354408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:48.465  [2024-11-20 13:43:51.354448] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:48.465  [2024-11-20 13:43:51.354555] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:48.465  [2024-11-20 13:43:51.354658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:23:48.465  [2024-11-20 13:43:51.354676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:48.465  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:48.725  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:23:48.725  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:23:48.725  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:23:48.725  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:23:48.725  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:48.725  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:48.725  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:48.725  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:23:48.725  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:23:48.725  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:48.725  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:48.726  [2024-11-20 13:43:51.498513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:23:48.726  [2024-11-20 13:43:51.501176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:23:48.726  [2024-11-20 13:43:51.501262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:23:48.726  [2024-11-20 13:43:51.501342] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:23:48.726  [2024-11-20 13:43:51.501434] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:23:48.726  [2024-11-20 13:43:51.501470] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:23:48.726  [2024-11-20 13:43:51.501499] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:48.726  [2024-11-20 13:43:51.501514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:23:48.726  request:
00:23:48.726  {
00:23:48.726  "name": "raid_bdev1",
00:23:48.726  "raid_level": "raid1",
00:23:48.726  "base_bdevs": [
00:23:48.726  "malloc1",
00:23:48.726  "malloc2",
00:23:48.726  "malloc3"
00:23:48.726  ],
00:23:48.726  "superblock": false,
00:23:48.726  "method": "bdev_raid_create",
00:23:48.726  "req_id": 1
00:23:48.726  }
00:23:48.726  Got JSON-RPC error response
00:23:48.726  response:
00:23:48.726  {
00:23:48.726  "code": -17,
00:23:48.726  "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:23:48.726  }
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:48.726  [2024-11-20 13:43:51.562528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:23:48.726  [2024-11-20 13:43:51.562746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:48.726  [2024-11-20 13:43:51.562824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:23:48.726  [2024-11-20 13:43:51.563043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:48.726  [2024-11-20 13:43:51.566122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:48.726  [2024-11-20 13:43:51.566282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:23:48.726  [2024-11-20 13:43:51.566504] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:23:48.726  [2024-11-20 13:43:51.566698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:23:48.726  pt1
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:48.726  "name": "raid_bdev1",
00:23:48.726  "uuid": "e2bc4ea9-6c85-40af-ab63-f9ff546beb3c",
00:23:48.726  "strip_size_kb": 0,
00:23:48.726  "state": "configuring",
00:23:48.726  "raid_level": "raid1",
00:23:48.726  "superblock": true,
00:23:48.726  "num_base_bdevs": 3,
00:23:48.726  "num_base_bdevs_discovered": 1,
00:23:48.726  "num_base_bdevs_operational": 3,
00:23:48.726  "base_bdevs_list": [
00:23:48.726  {
00:23:48.726  "name": "pt1",
00:23:48.726  "uuid": "00000000-0000-0000-0000-000000000001",
00:23:48.726  "is_configured": true,
00:23:48.726  "data_offset": 2048,
00:23:48.726  "data_size": 63488
00:23:48.726  },
00:23:48.726  {
00:23:48.726  "name": null,
00:23:48.726  "uuid": "00000000-0000-0000-0000-000000000002",
00:23:48.726  "is_configured": false,
00:23:48.726  "data_offset": 2048,
00:23:48.726  "data_size": 63488
00:23:48.726  },
00:23:48.726  {
00:23:48.726  "name": null,
00:23:48.726  "uuid": "00000000-0000-0000-0000-000000000003",
00:23:48.726  "is_configured": false,
00:23:48.726  "data_offset": 2048,
00:23:48.726  "data_size": 63488
00:23:48.726  }
00:23:48.726  ]
00:23:48.726  }'
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:48.726  13:43:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:49.295  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:23:49.295  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:49.295  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:49.295  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:49.295  [2024-11-20 13:43:52.058776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:49.295  [2024-11-20 13:43:52.058858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:49.295  [2024-11-20 13:43:52.058914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:23:49.295  [2024-11-20 13:43:52.058952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:49.295  [2024-11-20 13:43:52.059531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:49.295  [2024-11-20 13:43:52.059575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:49.296  [2024-11-20 13:43:52.059692] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:23:49.296  [2024-11-20 13:43:52.059726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:49.296  pt2
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:49.296  [2024-11-20 13:43:52.066744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:49.296  "name": "raid_bdev1",
00:23:49.296  "uuid": "e2bc4ea9-6c85-40af-ab63-f9ff546beb3c",
00:23:49.296  "strip_size_kb": 0,
00:23:49.296  "state": "configuring",
00:23:49.296  "raid_level": "raid1",
00:23:49.296  "superblock": true,
00:23:49.296  "num_base_bdevs": 3,
00:23:49.296  "num_base_bdevs_discovered": 1,
00:23:49.296  "num_base_bdevs_operational": 3,
00:23:49.296  "base_bdevs_list": [
00:23:49.296  {
00:23:49.296  "name": "pt1",
00:23:49.296  "uuid": "00000000-0000-0000-0000-000000000001",
00:23:49.296  "is_configured": true,
00:23:49.296  "data_offset": 2048,
00:23:49.296  "data_size": 63488
00:23:49.296  },
00:23:49.296  {
00:23:49.296  "name": null,
00:23:49.296  "uuid": "00000000-0000-0000-0000-000000000002",
00:23:49.296  "is_configured": false,
00:23:49.296  "data_offset": 0,
00:23:49.296  "data_size": 63488
00:23:49.296  },
00:23:49.296  {
00:23:49.296  "name": null,
00:23:49.296  "uuid": "00000000-0000-0000-0000-000000000003",
00:23:49.296  "is_configured": false,
00:23:49.296  "data_offset": 2048,
00:23:49.296  "data_size": 63488
00:23:49.296  }
00:23:49.296  ]
00:23:49.296  }'
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:49.296  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:49.865  [2024-11-20 13:43:52.546879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:49.865  [2024-11-20 13:43:52.546998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:49.865  [2024-11-20 13:43:52.547029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:23:49.865  [2024-11-20 13:43:52.547047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:49.865  [2024-11-20 13:43:52.547929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:49.865  [2024-11-20 13:43:52.547967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:49.865  [2024-11-20 13:43:52.548080] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:23:49.865  [2024-11-20 13:43:52.548132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:49.865  pt2
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:49.865  [2024-11-20 13:43:52.558855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:23:49.865  [2024-11-20 13:43:52.558952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:49.865  [2024-11-20 13:43:52.558985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:23:49.865  [2024-11-20 13:43:52.559001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:49.865  [2024-11-20 13:43:52.559498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:49.865  [2024-11-20 13:43:52.559551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:23:49.865  [2024-11-20 13:43:52.559641] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:23:49.865  [2024-11-20 13:43:52.559675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:23:49.865  [2024-11-20 13:43:52.559846] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:23:49.865  [2024-11-20 13:43:52.559877] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:23:49.865  [2024-11-20 13:43:52.560201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:23:49.865  [2024-11-20 13:43:52.560421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:23:49.865  [2024-11-20 13:43:52.560437] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:23:49.865  [2024-11-20 13:43:52.560609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:49.865  pt3
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:49.865  "name": "raid_bdev1",
00:23:49.865  "uuid": "e2bc4ea9-6c85-40af-ab63-f9ff546beb3c",
00:23:49.865  "strip_size_kb": 0,
00:23:49.865  "state": "online",
00:23:49.865  "raid_level": "raid1",
00:23:49.865  "superblock": true,
00:23:49.865  "num_base_bdevs": 3,
00:23:49.865  "num_base_bdevs_discovered": 3,
00:23:49.865  "num_base_bdevs_operational": 3,
00:23:49.865  "base_bdevs_list": [
00:23:49.865  {
00:23:49.865  "name": "pt1",
00:23:49.865  "uuid": "00000000-0000-0000-0000-000000000001",
00:23:49.865  "is_configured": true,
00:23:49.865  "data_offset": 2048,
00:23:49.865  "data_size": 63488
00:23:49.865  },
00:23:49.865  {
00:23:49.865  "name": "pt2",
00:23:49.865  "uuid": "00000000-0000-0000-0000-000000000002",
00:23:49.865  "is_configured": true,
00:23:49.865  "data_offset": 2048,
00:23:49.865  "data_size": 63488
00:23:49.865  },
00:23:49.865  {
00:23:49.865  "name": "pt3",
00:23:49.865  "uuid": "00000000-0000-0000-0000-000000000003",
00:23:49.865  "is_configured": true,
00:23:49.865  "data_offset": 2048,
00:23:49.865  "data_size": 63488
00:23:49.865  }
00:23:49.865  ]
00:23:49.865  }'
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:49.865  13:43:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:50.433  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:23:50.433  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:23:50.433  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:23:50.433  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:23:50.433  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:23:50.433  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:23:50.433  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:23:50.433  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:50.433  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:50.433  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:23:50.434  [2024-11-20 13:43:53.067456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:23:50.434  "name": "raid_bdev1",
00:23:50.434  "aliases": [
00:23:50.434  "e2bc4ea9-6c85-40af-ab63-f9ff546beb3c"
00:23:50.434  ],
00:23:50.434  "product_name": "Raid Volume",
00:23:50.434  "block_size": 512,
00:23:50.434  "num_blocks": 63488,
00:23:50.434  "uuid": "e2bc4ea9-6c85-40af-ab63-f9ff546beb3c",
00:23:50.434  "assigned_rate_limits": {
00:23:50.434  "rw_ios_per_sec": 0,
00:23:50.434  "rw_mbytes_per_sec": 0,
00:23:50.434  "r_mbytes_per_sec": 0,
00:23:50.434  "w_mbytes_per_sec": 0
00:23:50.434  },
00:23:50.434  "claimed": false,
00:23:50.434  "zoned": false,
00:23:50.434  "supported_io_types": {
00:23:50.434  "read": true,
00:23:50.434  "write": true,
00:23:50.434  "unmap": false,
00:23:50.434  "flush": false,
00:23:50.434  "reset": true,
00:23:50.434  "nvme_admin": false,
00:23:50.434  "nvme_io": false,
00:23:50.434  "nvme_io_md": false,
00:23:50.434  "write_zeroes": true,
00:23:50.434  "zcopy": false,
00:23:50.434  "get_zone_info": false,
00:23:50.434  "zone_management": false,
00:23:50.434  "zone_append": false,
00:23:50.434  "compare": false,
00:23:50.434  "compare_and_write": false,
00:23:50.434  "abort": false,
00:23:50.434  "seek_hole": false,
00:23:50.434  "seek_data": false,
00:23:50.434  "copy": false,
00:23:50.434  "nvme_iov_md": false
00:23:50.434  },
00:23:50.434  "memory_domains": [
00:23:50.434  {
00:23:50.434  "dma_device_id": "system",
00:23:50.434  "dma_device_type": 1
00:23:50.434  },
00:23:50.434  {
00:23:50.434  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:50.434  "dma_device_type": 2
00:23:50.434  },
00:23:50.434  {
00:23:50.434  "dma_device_id": "system",
00:23:50.434  "dma_device_type": 1
00:23:50.434  },
00:23:50.434  {
00:23:50.434  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:50.434  "dma_device_type": 2
00:23:50.434  },
00:23:50.434  {
00:23:50.434  "dma_device_id": "system",
00:23:50.434  "dma_device_type": 1
00:23:50.434  },
00:23:50.434  {
00:23:50.434  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:50.434  "dma_device_type": 2
00:23:50.434  }
00:23:50.434  ],
00:23:50.434  "driver_specific": {
00:23:50.434  "raid": {
00:23:50.434  "uuid": "e2bc4ea9-6c85-40af-ab63-f9ff546beb3c",
00:23:50.434  "strip_size_kb": 0,
00:23:50.434  "state": "online",
00:23:50.434  "raid_level": "raid1",
00:23:50.434  "superblock": true,
00:23:50.434  "num_base_bdevs": 3,
00:23:50.434  "num_base_bdevs_discovered": 3,
00:23:50.434  "num_base_bdevs_operational": 3,
00:23:50.434  "base_bdevs_list": [
00:23:50.434  {
00:23:50.434  "name": "pt1",
00:23:50.434  "uuid": "00000000-0000-0000-0000-000000000001",
00:23:50.434  "is_configured": true,
00:23:50.434  "data_offset": 2048,
00:23:50.434  "data_size": 63488
00:23:50.434  },
00:23:50.434  {
00:23:50.434  "name": "pt2",
00:23:50.434  "uuid": "00000000-0000-0000-0000-000000000002",
00:23:50.434  "is_configured": true,
00:23:50.434  "data_offset": 2048,
00:23:50.434  "data_size": 63488
00:23:50.434  },
00:23:50.434  {
00:23:50.434  "name": "pt3",
00:23:50.434  "uuid": "00000000-0000-0000-0000-000000000003",
00:23:50.434  "is_configured": true,
00:23:50.434  "data_offset": 2048,
00:23:50.434  "data_size": 63488
00:23:50.434  }
00:23:50.434  ]
00:23:50.434  }
00:23:50.434  }
00:23:50.434  }'
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:23:50.434  pt2
00:23:50.434  pt3'
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:50.434  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:50.694  [2024-11-20 13:43:53.399566] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e2bc4ea9-6c85-40af-ab63-f9ff546beb3c '!=' e2bc4ea9-6c85-40af-ab63-f9ff546beb3c ']'
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:50.694  [2024-11-20 13:43:53.451300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:50.694  13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:50.694  "name": "raid_bdev1",
00:23:50.694  "uuid": "e2bc4ea9-6c85-40af-ab63-f9ff546beb3c",
00:23:50.694  "strip_size_kb": 0,
00:23:50.694  "state": "online",
00:23:50.694  "raid_level": "raid1",
00:23:50.694  "superblock": true,
00:23:50.694  "num_base_bdevs": 3,
00:23:50.694  "num_base_bdevs_discovered": 2,
00:23:50.695  "num_base_bdevs_operational": 2,
00:23:50.695  "base_bdevs_list": [
00:23:50.695  {
00:23:50.695  "name": null,
00:23:50.695  "uuid": "00000000-0000-0000-0000-000000000000",
00:23:50.695  "is_configured": false,
00:23:50.695  "data_offset": 0,
00:23:50.695  "data_size": 63488
00:23:50.695  },
00:23:50.695  {
00:23:50.695  "name": "pt2",
00:23:50.695  "uuid": "00000000-0000-0000-0000-000000000002",
00:23:50.695  "is_configured": true,
00:23:50.695  "data_offset": 2048,
00:23:50.695  "data_size": 63488
00:23:50.695  },
00:23:50.695  {
00:23:50.695  "name": "pt3",
00:23:50.695  "uuid": "00000000-0000-0000-0000-000000000003",
00:23:50.695  "is_configured": true,
00:23:50.695  "data_offset": 2048,
00:23:50.695  "data_size": 63488
00:23:50.695  }
00:23:50.695 ] 00:23:50.695 }' 00:23:50.695 13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:50.695 13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.263 13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:51.263 13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.263 13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.263 [2024-11-20 13:43:53.971371] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:51.263 [2024-11-20 13:43:53.971601] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:51.263 [2024-11-20 13:43:53.971728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:51.263 [2024-11-20 13:43:53.971815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:51.263 [2024-11-20 13:43:53.971839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:51.263 13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.263 13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.263 13:43:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:51.263 13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.264 13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.264 13:43:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.264 13:43:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.264 [2024-11-20 13:43:54.043312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:51.264 [2024-11-20 13:43:54.043390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:51.264 [2024-11-20 13:43:54.043416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:51.264 [2024-11-20 13:43:54.043433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:51.264 [2024-11-20 13:43:54.046397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:51.264 [2024-11-20 13:43:54.046575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:51.264 [2024-11-20 13:43:54.046710] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:51.264 [2024-11-20 13:43:54.046791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:51.264 pt2 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:51.264 13:43:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:51.264 "name": "raid_bdev1", 00:23:51.264 "uuid": "e2bc4ea9-6c85-40af-ab63-f9ff546beb3c", 00:23:51.264 "strip_size_kb": 0, 00:23:51.264 "state": "configuring", 00:23:51.264 "raid_level": "raid1", 00:23:51.264 "superblock": true, 00:23:51.264 "num_base_bdevs": 3, 00:23:51.264 "num_base_bdevs_discovered": 1, 00:23:51.264 "num_base_bdevs_operational": 2, 00:23:51.264 "base_bdevs_list": [ 00:23:51.264 { 00:23:51.264 "name": null, 00:23:51.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.264 "is_configured": false, 00:23:51.264 "data_offset": 2048, 00:23:51.264 "data_size": 63488 00:23:51.264 }, 00:23:51.264 { 00:23:51.264 "name": "pt2", 00:23:51.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:51.264 "is_configured": true, 00:23:51.264 "data_offset": 2048, 00:23:51.264 "data_size": 63488 00:23:51.264 }, 00:23:51.264 { 00:23:51.264 "name": null, 00:23:51.264 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:51.264 "is_configured": false, 00:23:51.264 "data_offset": 2048, 00:23:51.264 "data_size": 63488 00:23:51.264 } 
00:23:51.264 ] 00:23:51.264 }' 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:51.264 13:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.832 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:23:51.832 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:51.832 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:23:51.832 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:51.832 13:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.832 13:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.832 [2024-11-20 13:43:54.663541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:51.832 [2024-11-20 13:43:54.663641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:51.832 [2024-11-20 13:43:54.663671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:51.832 [2024-11-20 13:43:54.663689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:51.832 [2024-11-20 13:43:54.664319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:51.832 [2024-11-20 13:43:54.664352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:51.832 [2024-11-20 13:43:54.664470] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:51.832 [2024-11-20 13:43:54.664522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:51.832 [2024-11-20 13:43:54.664675] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:23:51.832 [2024-11-20 13:43:54.664704] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:51.832 [2024-11-20 13:43:54.665069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:51.832 [2024-11-20 13:43:54.665290] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:51.833 [2024-11-20 13:43:54.665308] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:51.833 [2024-11-20 13:43:54.665489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:51.833 pt3 00:23:51.833 13:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.833 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:51.833 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:51.833 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:51.833 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:51.833 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:51.833 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:51.833 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:51.833 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:51.833 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:51.833 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:51.833 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.833 
13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.833 13:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.833 13:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.833 13:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.833 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:51.833 "name": "raid_bdev1", 00:23:51.833 "uuid": "e2bc4ea9-6c85-40af-ab63-f9ff546beb3c", 00:23:51.833 "strip_size_kb": 0, 00:23:51.833 "state": "online", 00:23:51.833 "raid_level": "raid1", 00:23:51.833 "superblock": true, 00:23:51.833 "num_base_bdevs": 3, 00:23:51.833 "num_base_bdevs_discovered": 2, 00:23:51.833 "num_base_bdevs_operational": 2, 00:23:51.833 "base_bdevs_list": [ 00:23:51.833 { 00:23:51.833 "name": null, 00:23:51.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.833 "is_configured": false, 00:23:51.833 "data_offset": 2048, 00:23:51.833 "data_size": 63488 00:23:51.833 }, 00:23:51.833 { 00:23:51.833 "name": "pt2", 00:23:51.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:51.833 "is_configured": true, 00:23:51.833 "data_offset": 2048, 00:23:51.833 "data_size": 63488 00:23:51.833 }, 00:23:51.833 { 00:23:51.833 "name": "pt3", 00:23:51.833 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:51.833 "is_configured": true, 00:23:51.833 "data_offset": 2048, 00:23:51.833 "data_size": 63488 00:23:51.833 } 00:23:51.833 ] 00:23:51.833 }' 00:23:51.833 13:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:51.833 13:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.402 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:52.402 13:43:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.402 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.403 [2024-11-20 13:43:55.235673] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:52.403 [2024-11-20 13:43:55.235717] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:52.403 [2024-11-20 13:43:55.235817] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:52.403 [2024-11-20 13:43:55.235948] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:52.403 [2024-11-20 13:43:55.235967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:52.403 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.403 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.403 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.403 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:52.403 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.403 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.662 [2024-11-20 13:43:55.375829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:52.662 [2024-11-20 13:43:55.376105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:52.662 [2024-11-20 13:43:55.376192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:52.662 [2024-11-20 13:43:55.376374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:52.662 [2024-11-20 13:43:55.379612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:52.662 [2024-11-20 13:43:55.379657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:52.662 [2024-11-20 13:43:55.379817] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:52.662 [2024-11-20 13:43:55.379885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:52.662 [2024-11-20 13:43:55.380137] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:52.662 [2024-11-20 13:43:55.380157] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:52.662 [2024-11-20 13:43:55.380182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:23:52.662 [2024-11-20 13:43:55.380259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:52.662 pt1 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:52.662 "name": "raid_bdev1", 00:23:52.662 "uuid": "e2bc4ea9-6c85-40af-ab63-f9ff546beb3c", 00:23:52.662 "strip_size_kb": 0, 00:23:52.662 "state": "configuring", 00:23:52.662 "raid_level": "raid1", 00:23:52.662 "superblock": true, 00:23:52.662 "num_base_bdevs": 3, 00:23:52.662 "num_base_bdevs_discovered": 1, 00:23:52.662 "num_base_bdevs_operational": 2, 00:23:52.662 "base_bdevs_list": [ 00:23:52.662 { 00:23:52.662 "name": null, 00:23:52.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.662 "is_configured": false, 00:23:52.662 "data_offset": 2048, 00:23:52.662 "data_size": 63488 00:23:52.662 }, 00:23:52.662 { 00:23:52.662 "name": "pt2", 00:23:52.662 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:52.662 "is_configured": true, 00:23:52.662 "data_offset": 2048, 00:23:52.662 "data_size": 63488 00:23:52.662 }, 00:23:52.662 { 00:23:52.662 "name": null, 00:23:52.662 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:52.662 "is_configured": false, 00:23:52.662 "data_offset": 2048, 00:23:52.662 "data_size": 63488 00:23:52.662 } 00:23:52.662 ] 00:23:52.662 }' 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:52.662 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.230 [2024-11-20 13:43:55.960120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:53.230 [2024-11-20 13:43:55.960210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:53.230 [2024-11-20 13:43:55.960246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:53.230 [2024-11-20 13:43:55.960261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:53.230 [2024-11-20 13:43:55.960865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:53.230 [2024-11-20 13:43:55.960909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:53.230 [2024-11-20 13:43:55.961022] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:53.230 [2024-11-20 13:43:55.961056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:53.230 [2024-11-20 13:43:55.961225] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:23:53.230 [2024-11-20 13:43:55.961248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:53.230 [2024-11-20 13:43:55.961561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:53.230 [2024-11-20 13:43:55.961762] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:23:53.230 [2024-11-20 13:43:55.961795] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:53.230 [2024-11-20 13:43:55.961987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:53.230 pt3 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.230 13:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:23:53.230 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:53.230 "name": "raid_bdev1", 00:23:53.230 "uuid": "e2bc4ea9-6c85-40af-ab63-f9ff546beb3c", 00:23:53.230 "strip_size_kb": 0, 00:23:53.230 "state": "online", 00:23:53.230 "raid_level": "raid1", 00:23:53.230 "superblock": true, 00:23:53.230 "num_base_bdevs": 3, 00:23:53.230 "num_base_bdevs_discovered": 2, 00:23:53.230 "num_base_bdevs_operational": 2, 00:23:53.230 "base_bdevs_list": [ 00:23:53.230 { 00:23:53.230 "name": null, 00:23:53.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.230 "is_configured": false, 00:23:53.230 "data_offset": 2048, 00:23:53.230 "data_size": 63488 00:23:53.230 }, 00:23:53.230 { 00:23:53.230 "name": "pt2", 00:23:53.230 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:53.230 "is_configured": true, 00:23:53.230 "data_offset": 2048, 00:23:53.230 "data_size": 63488 00:23:53.230 }, 00:23:53.230 { 00:23:53.230 "name": "pt3", 00:23:53.230 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:53.230 "is_configured": true, 00:23:53.230 "data_offset": 2048, 00:23:53.230 "data_size": 63488 00:23:53.230 } 00:23:53.230 ] 00:23:53.230 }' 00:23:53.230 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:53.230 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.798 [2024-11-20 13:43:56.548774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e2bc4ea9-6c85-40af-ab63-f9ff546beb3c '!=' e2bc4ea9-6c85-40af-ab63-f9ff546beb3c ']' 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68900 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68900 ']' 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68900 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68900 00:23:53.798 killing process with pid 68900 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68900' 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68900 00:23:53.798 [2024-11-20 13:43:56.626094] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:53.798 13:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68900 00:23:53.798 [2024-11-20 13:43:56.626233] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:53.798 [2024-11-20 13:43:56.626336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:53.798 [2024-11-20 13:43:56.626361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:54.056 [2024-11-20 13:43:56.943805] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:55.433 ************************************ 00:23:55.433 END TEST raid_superblock_test 00:23:55.433 ************************************ 00:23:55.433 13:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:23:55.433 00:23:55.433 real 0m8.844s 00:23:55.433 user 0m14.415s 00:23:55.433 sys 0m1.284s 00:23:55.433 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.433 13:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.433 13:43:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:23:55.433 13:43:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:55.433 13:43:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.433 13:43:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:55.433 ************************************ 00:23:55.433 START TEST raid_read_error_test 00:23:55.433 ************************************ 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:23:55.433 13:43:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:23:55.433 13:43:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BfwAgiDSxv 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69352 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69352 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69352 ']' 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.433 13:43:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.433 [2024-11-20 13:43:58.169606] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:23:55.433 [2024-11-20 13:43:58.170083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69352 ] 00:23:55.433 [2024-11-20 13:43:58.346876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.693 [2024-11-20 13:43:58.480308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.951 [2024-11-20 13:43:58.690024] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:55.951 [2024-11-20 13:43:58.690085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.518 BaseBdev1_malloc 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.518 true 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.518 [2024-11-20 13:43:59.204074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:56.518 [2024-11-20 13:43:59.204163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.518 [2024-11-20 13:43:59.204224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:56.518 [2024-11-20 13:43:59.204257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.518 [2024-11-20 13:43:59.207389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.518 [2024-11-20 13:43:59.207443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:56.518 BaseBdev1 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.518 BaseBdev2_malloc 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.518 true 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.518 [2024-11-20 13:43:59.261638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:56.518 [2024-11-20 13:43:59.261719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.518 [2024-11-20 13:43:59.261749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:56.518 [2024-11-20 13:43:59.261767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.518 [2024-11-20 13:43:59.264865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.518 [2024-11-20 13:43:59.264964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:56.518 BaseBdev2 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.518 BaseBdev3_malloc 00:23:56.518 13:43:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.518 true 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.518 [2024-11-20 13:43:59.329070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:56.518 [2024-11-20 13:43:59.329149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.518 [2024-11-20 13:43:59.329195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:56.518 [2024-11-20 13:43:59.329217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.518 [2024-11-20 13:43:59.332286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.518 [2024-11-20 13:43:59.332341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:56.518 BaseBdev3 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.518 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.518 [2024-11-20 13:43:59.337438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:56.518 [2024-11-20 13:43:59.340057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:56.518 [2024-11-20 13:43:59.340182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:56.518 [2024-11-20 13:43:59.340498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:56.519 [2024-11-20 13:43:59.340517] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:56.519 [2024-11-20 13:43:59.340866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:23:56.519 [2024-11-20 13:43:59.341153] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:56.519 [2024-11-20 13:43:59.341174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:56.519 [2024-11-20 13:43:59.341454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:56.519 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.519 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:56.519 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:56.519 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:56.519 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:56.519 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:56.519 13:43:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:56.519 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:56.519 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:56.519 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:56.519 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:56.519 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:56.519 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.519 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.519 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.519 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.519 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:56.519 "name": "raid_bdev1", 00:23:56.519 "uuid": "359d472d-c7a8-4970-98ae-dc114fc867b2", 00:23:56.519 "strip_size_kb": 0, 00:23:56.519 "state": "online", 00:23:56.519 "raid_level": "raid1", 00:23:56.519 "superblock": true, 00:23:56.519 "num_base_bdevs": 3, 00:23:56.519 "num_base_bdevs_discovered": 3, 00:23:56.519 "num_base_bdevs_operational": 3, 00:23:56.519 "base_bdevs_list": [ 00:23:56.519 { 00:23:56.519 "name": "BaseBdev1", 00:23:56.519 "uuid": "4af54efc-7f81-5c74-8e3b-a40557c31cd4", 00:23:56.519 "is_configured": true, 00:23:56.519 "data_offset": 2048, 00:23:56.519 "data_size": 63488 00:23:56.519 }, 00:23:56.519 { 00:23:56.519 "name": "BaseBdev2", 00:23:56.519 "uuid": "f7957229-7332-5e63-be60-11c0f947283e", 00:23:56.519 "is_configured": true, 00:23:56.519 "data_offset": 2048, 00:23:56.519 "data_size": 63488 
00:23:56.519 }, 00:23:56.519 { 00:23:56.519 "name": "BaseBdev3", 00:23:56.519 "uuid": "302264dc-4ad6-5594-b49d-e2c7e0ee0e7c", 00:23:56.519 "is_configured": true, 00:23:56.519 "data_offset": 2048, 00:23:56.519 "data_size": 63488 00:23:56.519 } 00:23:56.519 ] 00:23:56.519 }' 00:23:56.519 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:56.519 13:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.086 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:23:57.086 13:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:23:57.344 [2024-11-20 13:44:00.027047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:23:58.279 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:23:58.279 13:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.279 13:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.279 13:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.279 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:23:58.279 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:23:58.279 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:23:58.279 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:23:58.279 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:58.279 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:58.279 
13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:58.279 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:58.279 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:58.279 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:58.280 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:58.280 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:58.280 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:58.280 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:58.280 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.280 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.280 13:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.280 13:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.280 13:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.280 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:58.280 "name": "raid_bdev1", 00:23:58.280 "uuid": "359d472d-c7a8-4970-98ae-dc114fc867b2", 00:23:58.280 "strip_size_kb": 0, 00:23:58.280 "state": "online", 00:23:58.280 "raid_level": "raid1", 00:23:58.280 "superblock": true, 00:23:58.280 "num_base_bdevs": 3, 00:23:58.280 "num_base_bdevs_discovered": 3, 00:23:58.280 "num_base_bdevs_operational": 3, 00:23:58.280 "base_bdevs_list": [ 00:23:58.280 { 00:23:58.280 "name": "BaseBdev1", 00:23:58.280 "uuid": "4af54efc-7f81-5c74-8e3b-a40557c31cd4", 
00:23:58.280 "is_configured": true, 00:23:58.280 "data_offset": 2048, 00:23:58.280 "data_size": 63488 00:23:58.280 }, 00:23:58.280 { 00:23:58.280 "name": "BaseBdev2", 00:23:58.280 "uuid": "f7957229-7332-5e63-be60-11c0f947283e", 00:23:58.280 "is_configured": true, 00:23:58.280 "data_offset": 2048, 00:23:58.280 "data_size": 63488 00:23:58.280 }, 00:23:58.280 { 00:23:58.280 "name": "BaseBdev3", 00:23:58.280 "uuid": "302264dc-4ad6-5594-b49d-e2c7e0ee0e7c", 00:23:58.280 "is_configured": true, 00:23:58.280 "data_offset": 2048, 00:23:58.280 "data_size": 63488 00:23:58.280 } 00:23:58.280 ] 00:23:58.280 }' 00:23:58.280 13:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:58.280 13:44:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.538 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:58.797 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.797 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.797 [2024-11-20 13:44:01.456072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:58.797 [2024-11-20 13:44:01.456268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:58.797 { 00:23:58.797 "results": [ 00:23:58.797 { 00:23:58.798 "job": "raid_bdev1", 00:23:58.798 "core_mask": "0x1", 00:23:58.798 "workload": "randrw", 00:23:58.798 "percentage": 50, 00:23:58.798 "status": "finished", 00:23:58.798 "queue_depth": 1, 00:23:58.798 "io_size": 131072, 00:23:58.798 "runtime": 1.426682, 00:23:58.798 "iops": 8152.482473319212, 00:23:58.798 "mibps": 1019.0603091649015, 00:23:58.798 "io_failed": 0, 00:23:58.798 "io_timeout": 0, 00:23:58.798 "avg_latency_us": 118.01252639888698, 00:23:58.798 "min_latency_us": 44.68363636363637, 00:23:58.798 "max_latency_us": 2368.232727272727 00:23:58.798 
} 00:23:58.798 ], 00:23:58.798 "core_count": 1 00:23:58.798 } 00:23:58.798 [2024-11-20 13:44:01.460051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:58.798 [2024-11-20 13:44:01.460235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.798 [2024-11-20 13:44:01.460394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:58.798 [2024-11-20 13:44:01.460411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:58.798 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.798 13:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69352 00:23:58.798 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69352 ']' 00:23:58.798 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69352 00:23:58.798 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:23:58.798 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.798 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69352 00:23:58.798 killing process with pid 69352 00:23:58.798 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:58.798 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:58.798 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69352' 00:23:58.798 13:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69352 00:23:58.798 [2024-11-20 13:44:01.501092] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:58.798 13:44:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69352 00:23:59.056 [2024-11-20 13:44:01.722101] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:59.998 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BfwAgiDSxv 00:23:59.998 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:23:59.998 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:23:59.998 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:23:59.998 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:23:59.998 ************************************ 00:23:59.998 END TEST raid_read_error_test 00:23:59.998 ************************************ 00:23:59.998 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:59.998 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:23:59.998 13:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:23:59.998 00:23:59.998 real 0m4.834s 00:23:59.998 user 0m6.006s 00:23:59.998 sys 0m0.598s 00:23:59.998 13:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:59.998 13:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.258 13:44:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:24:00.258 13:44:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:00.258 13:44:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:00.258 13:44:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:00.258 ************************************ 00:24:00.258 START TEST raid_write_error_test 00:24:00.258 ************************************ 00:24:00.258 13:44:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nLmXCQuLE5 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69498 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69498 00:24:00.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69498 ']' 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:00.258 13:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:24:00.258 [2024-11-20 13:44:03.090828] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization...
00:24:00.258 [2024-11-20 13:44:03.091446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69498 ]
00:24:00.517 [2024-11-20 13:44:03.289222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:00.776 [2024-11-20 13:44:03.463381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:01.035 [2024-11-20 13:44:03.719050] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:24:01.035 [2024-11-20 13:44:03.719124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:24:01.294 BaseBdev1_malloc
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:24:01.294 true
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:24:01.294 [2024-11-20 13:44:04.133264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:24:01.294 [2024-11-20 13:44:04.133373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:01.294 [2024-11-20 13:44:04.133406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:24:01.294 [2024-11-20 13:44:04.133426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:01.294 [2024-11-20 13:44:04.136513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:01.294 [2024-11-20 13:44:04.136820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:24:01.294 BaseBdev1
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:24:01.294 BaseBdev2_malloc
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:24:01.294 true
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:24:01.294 [2024-11-20 13:44:04.198552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:24:01.294 [2024-11-20 13:44:04.198649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:01.294 [2024-11-20 13:44:04.198675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:24:01.294 [2024-11-20 13:44:04.198694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:01.294 [2024-11-20 13:44:04.201815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:01.294 [2024-11-20 13:44:04.201866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:24:01.294 BaseBdev2
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.294 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:24:01.554 BaseBdev3_malloc
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:24:01.554 true
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:24:01.554 [2024-11-20 13:44:04.276317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:24:01.554 [2024-11-20 13:44:04.276660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:01.554 [2024-11-20 13:44:04.276699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:24:01.554 [2024-11-20 13:44:04.276720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:01.554 [2024-11-20 13:44:04.279959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:01.554 [2024-11-20 13:44:04.280022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:24:01.554 BaseBdev3
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:24:01.554 [2024-11-20 13:44:04.284429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:24:01.554 [2024-11-20 13:44:04.287307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:24:01.554 [2024-11-20 13:44:04.287417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:24:01.554 [2024-11-20 13:44:04.287707] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:24:01.554 [2024-11-20 13:44:04.287728] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:24:01.554 [2024-11-20 13:44:04.288065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560
00:24:01.554 [2024-11-20 13:44:04.288324] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:24:01.554 [2024-11-20 13:44:04.288369] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:24:01.554 [2024-11-20 13:44:04.288634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:01.554 "name": "raid_bdev1",
00:24:01.554 "uuid": "28aec0aa-9247-405e-954c-167f9301b4d4",
00:24:01.554 "strip_size_kb": 0,
00:24:01.554 "state": "online",
00:24:01.554 "raid_level": "raid1",
00:24:01.554 "superblock": true,
00:24:01.554 "num_base_bdevs": 3,
00:24:01.554 "num_base_bdevs_discovered": 3,
00:24:01.554 "num_base_bdevs_operational": 3,
00:24:01.554 "base_bdevs_list": [
00:24:01.554 {
00:24:01.554 "name": "BaseBdev1",
00:24:01.554 "uuid": "e1b76a04-cbf5-5225-9406-e0382ef45d8e",
00:24:01.554 "is_configured": true,
00:24:01.554 "data_offset": 2048,
00:24:01.554 "data_size": 63488
00:24:01.554 },
00:24:01.554 {
00:24:01.554 "name": "BaseBdev2",
00:24:01.554 "uuid": "ff04c5e7-f7d1-553e-b6cf-4d467fe11c79",
00:24:01.554 "is_configured": true,
00:24:01.554 "data_offset": 2048,
00:24:01.554 "data_size": 63488
00:24:01.554 },
00:24:01.554 {
00:24:01.554 "name": "BaseBdev3",
00:24:01.554 "uuid": "7183e09d-ce80-5a45-bde6-f8a781103318",
00:24:01.554 "is_configured": true,
00:24:01.554 "data_offset": 2048,
00:24:01.554 "data_size": 63488
00:24:01.554 }
00:24:01.554 ]
00:24:01.554 }'
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:01.554 13:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:24:02.123 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:24:02.123 13:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:24:02.123 [2024-11-20 13:44:04.914506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700
00:24:03.059 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:24:03.060 [2024-11-20 13:44:05.803804] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:24:03.060 [2024-11-20 13:44:05.804326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:24:03.060 [2024-11-20 13:44:05.804665] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:03.060 "name": "raid_bdev1",
00:24:03.060 "uuid": "28aec0aa-9247-405e-954c-167f9301b4d4",
00:24:03.060 "strip_size_kb": 0,
00:24:03.060 "state": "online",
00:24:03.060 "raid_level": "raid1",
00:24:03.060 "superblock": true,
00:24:03.060 "num_base_bdevs": 3,
00:24:03.060 "num_base_bdevs_discovered": 2,
00:24:03.060 "num_base_bdevs_operational": 2,
00:24:03.060 "base_bdevs_list": [
00:24:03.060 {
00:24:03.060 "name": null,
00:24:03.060 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:03.060 "is_configured": false,
00:24:03.060 "data_offset": 0,
00:24:03.060 "data_size": 63488
00:24:03.060 },
00:24:03.060 {
00:24:03.060 "name": "BaseBdev2",
00:24:03.060 "uuid": "ff04c5e7-f7d1-553e-b6cf-4d467fe11c79",
00:24:03.060 "is_configured": true,
00:24:03.060 "data_offset": 2048,
00:24:03.060 "data_size": 63488
00:24:03.060 },
00:24:03.060 {
00:24:03.060 "name": "BaseBdev3",
00:24:03.060 "uuid": "7183e09d-ce80-5a45-bde6-f8a781103318",
00:24:03.060 "is_configured": true,
00:24:03.060 "data_offset": 2048,
00:24:03.060 "data_size": 63488
00:24:03.060 }
00:24:03.060 ]
00:24:03.060 }'
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:03.060 13:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:24:03.693 13:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:24:03.693 13:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:03.693 13:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:24:03.693 [2024-11-20 13:44:06.249238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:24:03.693 [2024-11-20 13:44:06.249618] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:24:03.693 {
00:24:03.693 "results": [
00:24:03.693 {
00:24:03.693 "job": "raid_bdev1",
00:24:03.693 "core_mask": "0x1",
00:24:03.693 "workload": "randrw",
00:24:03.693 "percentage": 50,
00:24:03.693 "status": "finished",
00:24:03.693 "queue_depth": 1,
00:24:03.693 "io_size": 131072,
00:24:03.693 "runtime": 1.332192,
00:24:03.693 "iops": 8270.579616151426,
00:24:03.693 "mibps": 1033.8224520189283,
00:24:03.693 "io_failed": 0,
00:24:03.693 "io_timeout": 0,
00:24:03.693 "avg_latency_us": 115.99818610868167,
00:24:03.693 "min_latency_us": 41.192727272727275,
00:24:03.693 "max_latency_us": 2249.0763636363636
00:24:03.693 }
00:24:03.693 ],
00:24:03.693 "core_count": 1
00:24:03.693 }
00:24:03.693 [2024-11-20 13:44:06.253253] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:03.693 [2024-11-20 13:44:06.253410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:03.693 [2024-11-20 13:44:06.253544] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:24:03.693 [2024-11-20 13:44:06.253571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:24:03.693 13:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:03.693 13:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69498
00:24:03.693 13:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69498 ']'
00:24:03.693 13:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69498
00:24:03.693 13:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:24:03.693 13:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:03.693 13:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69498
killing process with pid 69498
00:24:03.693 13:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:03.693 13:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:03.693 13:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69498'
00:24:03.693 13:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69498
00:24:03.693 13:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69498
00:24:03.693 [2024-11-20 13:44:06.289801] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:24:03.693 [2024-11-20 13:44:06.525218] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:24:05.070 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:24:05.070 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nLmXCQuLE5
00:24:05.070 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:24:05.070 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:24:05.070 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:24:05.070 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:24:05.070 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:24:05.070 13:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:24:05.070
00:24:05.070 real 0m4.848s
00:24:05.070 user 0m5.806s
00:24:05.070 sys 0m0.677s
00:24:05.070 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:05.070 13:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:24:05.070 ************************************
00:24:05.070 END TEST raid_write_error_test
00:24:05.070 ************************************
00:24:05.070 13:44:07 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:24:05.070 13:44:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:24:05.070 13:44:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false
00:24:05.070 13:44:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:24:05.070 13:44:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:05.070 13:44:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:24:05.070 ************************************
00:24:05.070 START TEST raid_state_function_test
00:24:05.070 ************************************
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
Process raid pid: 69646
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69646
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69646'
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69646
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69646 ']'
00:24:05.070 13:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:05.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:05.071 13:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:05.071 13:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:05.071 13:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:05.071 13:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:05.071 [2024-11-20 13:44:07.950521] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization...
00:24:05.071 [2024-11-20 13:44:07.950704] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:05.329 [2024-11-20 13:44:08.142850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:05.588 [2024-11-20 13:44:08.297591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:05.848 [2024-11-20 13:44:08.508137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:24:05.848 [2024-11-20 13:44:08.508180] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:24:06.106 13:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:06.106 13:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:24:06.107 13:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:24:06.107 13:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:06.107 13:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:06.107 [2024-11-20 13:44:08.970267] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:24:06.107 [2024-11-20 13:44:08.970397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:24:06.107 [2024-11-20 13:44:08.970417] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:24:06.107 [2024-11-20 13:44:08.970434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:24:06.107 [2024-11-20 13:44:08.970445] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:24:06.107 [2024-11-20 13:44:08.970460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:24:06.107 [2024-11-20 13:44:08.970471] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:24:06.107 [2024-11-20 13:44:08.970486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:24:06.107 13:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:06.107 13:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:24:06.107 13:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:24:06.107 13:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:24:06.107 13:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:24:06.107 13:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:24:06.107 13:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:24:06.107 13:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:06.107 13:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:06.107 13:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:06.107 13:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:06.107 13:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:06.107 13:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:24:06.107 13:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:06.107 13:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:06.107 13:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:06.366 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:06.366 "name": "Existed_Raid",
00:24:06.366 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:06.366 "strip_size_kb": 64,
00:24:06.366 "state": "configuring",
00:24:06.366 "raid_level": "raid0",
00:24:06.366 "superblock": false,
00:24:06.366 "num_base_bdevs": 4,
00:24:06.366 "num_base_bdevs_discovered": 0,
00:24:06.366 "num_base_bdevs_operational": 4,
00:24:06.366 "base_bdevs_list": [
00:24:06.366 {
00:24:06.366 "name": "BaseBdev1",
00:24:06.366 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:06.366 "is_configured": false,
00:24:06.366 "data_offset": 0,
00:24:06.366 "data_size": 0
00:24:06.366 },
00:24:06.366 {
00:24:06.366 "name": "BaseBdev2",
00:24:06.366 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:06.366 "is_configured": false,
00:24:06.366 "data_offset": 0,
00:24:06.366 "data_size": 0
00:24:06.366 },
00:24:06.366 {
00:24:06.366 "name": "BaseBdev3",
00:24:06.366 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:06.366 "is_configured": false,
00:24:06.366 "data_offset": 0,
00:24:06.366 "data_size": 0
00:24:06.366 },
00:24:06.366 {
00:24:06.366 "name": "BaseBdev4",
00:24:06.366 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:06.366 "is_configured": false,
00:24:06.366 "data_offset": 0,
00:24:06.366 "data_size": 0
00:24:06.366 }
00:24:06.366 ]
00:24:06.366 }'
00:24:06.366 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:06.366 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:06.625 [2024-11-20 13:44:09.466416] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:24:06.625 [2024-11-20 13:44:09.466493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:06.625 [2024-11-20 13:44:09.478344] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:24:06.625 [2024-11-20 13:44:09.478423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:24:06.625 [2024-11-20 13:44:09.478440] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:24:06.625 [2024-11-20 13:44:09.478457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:24:06.625 [2024-11-20 13:44:09.478467] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:24:06.625 [2024-11-20 13:44:09.478482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:24:06.625 [2024-11-20 13:44:09.478491] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:24:06.625 [2024-11-20 13:44:09.478506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:06.625 [2024-11-20 13:44:09.529598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:24:06.625 BaseBdev1
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:06.625 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:24:06.885 [
00:24:06.885 {
00:24:06.885 "name": "BaseBdev1",
00:24:06.885 "aliases": [
00:24:06.885 "4a85001e-a955-4e82-b45b-e6953ff91101"
00:24:06.885 ],
00:24:06.885 "product_name": "Malloc disk",
00:24:06.885 "block_size": 512,
00:24:06.885 "num_blocks": 65536,
00:24:06.885 "uuid": "4a85001e-a955-4e82-b45b-e6953ff91101",
00:24:06.885 "assigned_rate_limits": {
00:24:06.885 "rw_ios_per_sec": 0,
00:24:06.885 "rw_mbytes_per_sec": 0,
00:24:06.885 "r_mbytes_per_sec": 0,
00:24:06.885 "w_mbytes_per_sec": 0
00:24:06.885 },
00:24:06.885 "claimed": true,
00:24:06.885 "claim_type": "exclusive_write",
00:24:06.885 "zoned": false,
00:24:06.885 "supported_io_types": {
00:24:06.885 "read": true,
00:24:06.885 "write": true,
00:24:06.885 "unmap": true,
00:24:06.885 "flush": true,
00:24:06.885 "reset": true,
00:24:06.885 "nvme_admin": false,
00:24:06.885 "nvme_io": false,
00:24:06.885 "nvme_io_md": false,
00:24:06.885 "write_zeroes": true,
00:24:06.885 "zcopy": true,
00:24:06.885 "get_zone_info": false,
00:24:06.885 "zone_management": false,
00:24:06.885 "zone_append": false,
00:24:06.885 "compare": false,
00:24:06.885 "compare_and_write": false,
00:24:06.885 "abort": true,
00:24:06.885 "seek_hole": false,
00:24:06.885 "seek_data": false,
00:24:06.885 "copy": true,
00:24:06.885 "nvme_iov_md": false
00:24:06.885 },
00:24:06.885 "memory_domains": [
00:24:06.885 {
00:24:06.885 "dma_device_id": "system",
00:24:06.885 "dma_device_type": 1
00:24:06.885 },
00:24:06.885 {
00:24:06.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:06.885 "dma_device_type": 2
00:24:06.885 }
00:24:06.885 ],
00:24:06.885 "driver_specific": {}
00:24:06.885 }
00:24:06.885 ]
00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591
-- # [[ 0 == 0 ]] 00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:06.885 "name": "Existed_Raid", 
00:24:06.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.885 "strip_size_kb": 64, 00:24:06.885 "state": "configuring", 00:24:06.885 "raid_level": "raid0", 00:24:06.885 "superblock": false, 00:24:06.885 "num_base_bdevs": 4, 00:24:06.885 "num_base_bdevs_discovered": 1, 00:24:06.885 "num_base_bdevs_operational": 4, 00:24:06.885 "base_bdevs_list": [ 00:24:06.885 { 00:24:06.885 "name": "BaseBdev1", 00:24:06.885 "uuid": "4a85001e-a955-4e82-b45b-e6953ff91101", 00:24:06.885 "is_configured": true, 00:24:06.885 "data_offset": 0, 00:24:06.885 "data_size": 65536 00:24:06.885 }, 00:24:06.885 { 00:24:06.885 "name": "BaseBdev2", 00:24:06.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.885 "is_configured": false, 00:24:06.885 "data_offset": 0, 00:24:06.885 "data_size": 0 00:24:06.885 }, 00:24:06.885 { 00:24:06.885 "name": "BaseBdev3", 00:24:06.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.885 "is_configured": false, 00:24:06.885 "data_offset": 0, 00:24:06.885 "data_size": 0 00:24:06.885 }, 00:24:06.885 { 00:24:06.885 "name": "BaseBdev4", 00:24:06.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.885 "is_configured": false, 00:24:06.885 "data_offset": 0, 00:24:06.885 "data_size": 0 00:24:06.885 } 00:24:06.885 ] 00:24:06.885 }' 00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:06.885 13:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.145 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:07.145 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.145 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.145 [2024-11-20 13:44:10.057917] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:07.404 [2024-11-20 13:44:10.058333] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.404 [2024-11-20 13:44:10.065923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:07.404 [2024-11-20 13:44:10.068463] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:07.404 [2024-11-20 13:44:10.068518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:07.404 [2024-11-20 13:44:10.068534] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:07.404 [2024-11-20 13:44:10.068550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:07.404 [2024-11-20 13:44:10.068560] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:07.404 [2024-11-20 13:44:10.068572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.404 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:07.404 "name": "Existed_Raid", 00:24:07.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.404 "strip_size_kb": 64, 00:24:07.404 "state": "configuring", 00:24:07.404 "raid_level": "raid0", 00:24:07.404 "superblock": false, 00:24:07.404 "num_base_bdevs": 4, 00:24:07.404 
"num_base_bdevs_discovered": 1, 00:24:07.404 "num_base_bdevs_operational": 4, 00:24:07.404 "base_bdevs_list": [ 00:24:07.404 { 00:24:07.404 "name": "BaseBdev1", 00:24:07.404 "uuid": "4a85001e-a955-4e82-b45b-e6953ff91101", 00:24:07.404 "is_configured": true, 00:24:07.404 "data_offset": 0, 00:24:07.404 "data_size": 65536 00:24:07.404 }, 00:24:07.404 { 00:24:07.404 "name": "BaseBdev2", 00:24:07.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.404 "is_configured": false, 00:24:07.404 "data_offset": 0, 00:24:07.404 "data_size": 0 00:24:07.405 }, 00:24:07.405 { 00:24:07.405 "name": "BaseBdev3", 00:24:07.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.405 "is_configured": false, 00:24:07.405 "data_offset": 0, 00:24:07.405 "data_size": 0 00:24:07.405 }, 00:24:07.405 { 00:24:07.405 "name": "BaseBdev4", 00:24:07.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.405 "is_configured": false, 00:24:07.405 "data_offset": 0, 00:24:07.405 "data_size": 0 00:24:07.405 } 00:24:07.405 ] 00:24:07.405 }' 00:24:07.405 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:07.405 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.665 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:07.665 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.665 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.924 BaseBdev2 00:24:07.924 [2024-11-20 13:44:10.622388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:07.924 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:07.925 13:44:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.925 [ 00:24:07.925 { 00:24:07.925 "name": "BaseBdev2", 00:24:07.925 "aliases": [ 00:24:07.925 "f71c32f4-2dbf-4ffc-a0be-4045ed894a99" 00:24:07.925 ], 00:24:07.925 "product_name": "Malloc disk", 00:24:07.925 "block_size": 512, 00:24:07.925 "num_blocks": 65536, 00:24:07.925 "uuid": "f71c32f4-2dbf-4ffc-a0be-4045ed894a99", 00:24:07.925 "assigned_rate_limits": { 00:24:07.925 "rw_ios_per_sec": 0, 00:24:07.925 "rw_mbytes_per_sec": 0, 00:24:07.925 "r_mbytes_per_sec": 0, 00:24:07.925 "w_mbytes_per_sec": 0 00:24:07.925 }, 00:24:07.925 "claimed": true, 00:24:07.925 "claim_type": "exclusive_write", 00:24:07.925 "zoned": false, 00:24:07.925 "supported_io_types": { 
00:24:07.925 "read": true, 00:24:07.925 "write": true, 00:24:07.925 "unmap": true, 00:24:07.925 "flush": true, 00:24:07.925 "reset": true, 00:24:07.925 "nvme_admin": false, 00:24:07.925 "nvme_io": false, 00:24:07.925 "nvme_io_md": false, 00:24:07.925 "write_zeroes": true, 00:24:07.925 "zcopy": true, 00:24:07.925 "get_zone_info": false, 00:24:07.925 "zone_management": false, 00:24:07.925 "zone_append": false, 00:24:07.925 "compare": false, 00:24:07.925 "compare_and_write": false, 00:24:07.925 "abort": true, 00:24:07.925 "seek_hole": false, 00:24:07.925 "seek_data": false, 00:24:07.925 "copy": true, 00:24:07.925 "nvme_iov_md": false 00:24:07.925 }, 00:24:07.925 "memory_domains": [ 00:24:07.925 { 00:24:07.925 "dma_device_id": "system", 00:24:07.925 "dma_device_type": 1 00:24:07.925 }, 00:24:07.925 { 00:24:07.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:07.925 "dma_device_type": 2 00:24:07.925 } 00:24:07.925 ], 00:24:07.925 "driver_specific": {} 00:24:07.925 } 00:24:07.925 ] 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:07.925 "name": "Existed_Raid", 00:24:07.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.925 "strip_size_kb": 64, 00:24:07.925 "state": "configuring", 00:24:07.925 "raid_level": "raid0", 00:24:07.925 "superblock": false, 00:24:07.925 "num_base_bdevs": 4, 00:24:07.925 "num_base_bdevs_discovered": 2, 00:24:07.925 "num_base_bdevs_operational": 4, 00:24:07.925 "base_bdevs_list": [ 00:24:07.925 { 00:24:07.925 "name": "BaseBdev1", 00:24:07.925 "uuid": "4a85001e-a955-4e82-b45b-e6953ff91101", 00:24:07.925 "is_configured": true, 00:24:07.925 "data_offset": 0, 00:24:07.925 "data_size": 65536 00:24:07.925 }, 00:24:07.925 { 00:24:07.925 "name": "BaseBdev2", 00:24:07.925 "uuid": "f71c32f4-2dbf-4ffc-a0be-4045ed894a99", 00:24:07.925 
"is_configured": true, 00:24:07.925 "data_offset": 0, 00:24:07.925 "data_size": 65536 00:24:07.925 }, 00:24:07.925 { 00:24:07.925 "name": "BaseBdev3", 00:24:07.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.925 "is_configured": false, 00:24:07.925 "data_offset": 0, 00:24:07.925 "data_size": 0 00:24:07.925 }, 00:24:07.925 { 00:24:07.925 "name": "BaseBdev4", 00:24:07.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.925 "is_configured": false, 00:24:07.925 "data_offset": 0, 00:24:07.925 "data_size": 0 00:24:07.925 } 00:24:07.925 ] 00:24:07.925 }' 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:07.925 13:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.494 [2024-11-20 13:44:11.215194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:08.494 BaseBdev3 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.494 [ 00:24:08.494 { 00:24:08.494 "name": "BaseBdev3", 00:24:08.494 "aliases": [ 00:24:08.494 "c1f772da-26c8-411e-a9d0-0c442ab7a897" 00:24:08.494 ], 00:24:08.494 "product_name": "Malloc disk", 00:24:08.494 "block_size": 512, 00:24:08.494 "num_blocks": 65536, 00:24:08.494 "uuid": "c1f772da-26c8-411e-a9d0-0c442ab7a897", 00:24:08.494 "assigned_rate_limits": { 00:24:08.494 "rw_ios_per_sec": 0, 00:24:08.494 "rw_mbytes_per_sec": 0, 00:24:08.494 "r_mbytes_per_sec": 0, 00:24:08.494 "w_mbytes_per_sec": 0 00:24:08.494 }, 00:24:08.494 "claimed": true, 00:24:08.494 "claim_type": "exclusive_write", 00:24:08.494 "zoned": false, 00:24:08.494 "supported_io_types": { 00:24:08.494 "read": true, 00:24:08.494 "write": true, 00:24:08.494 "unmap": true, 00:24:08.494 "flush": true, 00:24:08.494 "reset": true, 00:24:08.494 "nvme_admin": false, 00:24:08.494 "nvme_io": false, 00:24:08.494 "nvme_io_md": false, 00:24:08.494 "write_zeroes": true, 00:24:08.494 "zcopy": true, 00:24:08.494 "get_zone_info": false, 00:24:08.494 "zone_management": false, 00:24:08.494 "zone_append": false, 00:24:08.494 "compare": false, 00:24:08.494 "compare_and_write": false, 
00:24:08.494 "abort": true, 00:24:08.494 "seek_hole": false, 00:24:08.494 "seek_data": false, 00:24:08.494 "copy": true, 00:24:08.494 "nvme_iov_md": false 00:24:08.494 }, 00:24:08.494 "memory_domains": [ 00:24:08.494 { 00:24:08.494 "dma_device_id": "system", 00:24:08.494 "dma_device_type": 1 00:24:08.494 }, 00:24:08.494 { 00:24:08.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:08.494 "dma_device_type": 2 00:24:08.494 } 00:24:08.494 ], 00:24:08.494 "driver_specific": {} 00:24:08.494 } 00:24:08.494 ] 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:08.494 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:08.495 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:24:08.495 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:08.495 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.495 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:08.495 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.495 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.495 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.495 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:08.495 "name": "Existed_Raid", 00:24:08.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.495 "strip_size_kb": 64, 00:24:08.495 "state": "configuring", 00:24:08.495 "raid_level": "raid0", 00:24:08.495 "superblock": false, 00:24:08.495 "num_base_bdevs": 4, 00:24:08.495 "num_base_bdevs_discovered": 3, 00:24:08.495 "num_base_bdevs_operational": 4, 00:24:08.495 "base_bdevs_list": [ 00:24:08.495 { 00:24:08.495 "name": "BaseBdev1", 00:24:08.495 "uuid": "4a85001e-a955-4e82-b45b-e6953ff91101", 00:24:08.495 "is_configured": true, 00:24:08.495 "data_offset": 0, 00:24:08.495 "data_size": 65536 00:24:08.495 }, 00:24:08.495 { 00:24:08.495 "name": "BaseBdev2", 00:24:08.495 "uuid": "f71c32f4-2dbf-4ffc-a0be-4045ed894a99", 00:24:08.495 "is_configured": true, 00:24:08.495 "data_offset": 0, 00:24:08.495 "data_size": 65536 00:24:08.495 }, 00:24:08.495 { 00:24:08.495 "name": "BaseBdev3", 00:24:08.495 "uuid": "c1f772da-26c8-411e-a9d0-0c442ab7a897", 00:24:08.495 "is_configured": true, 00:24:08.495 "data_offset": 0, 00:24:08.495 "data_size": 65536 00:24:08.495 }, 00:24:08.495 { 00:24:08.495 "name": "BaseBdev4", 00:24:08.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.495 "is_configured": false, 
00:24:08.495 "data_offset": 0, 00:24:08.495 "data_size": 0 00:24:08.495 } 00:24:08.495 ] 00:24:08.495 }' 00:24:08.495 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:08.495 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.062 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:24:09.062 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.062 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.062 [2024-11-20 13:44:11.790099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:09.062 [2024-11-20 13:44:11.790189] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:09.062 [2024-11-20 13:44:11.790205] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:24:09.062 [2024-11-20 13:44:11.790626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:09.062 [2024-11-20 13:44:11.790865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:09.062 [2024-11-20 13:44:11.790888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:24:09.062 [2024-11-20 13:44:11.791567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:09.062 BaseBdev4 00:24:09.062 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.062 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:24:09.062 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:24:09.062 13:44:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:09.062 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:09.062 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:09.062 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:09.062 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:09.062 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.062 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.062 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.062 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:09.062 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.062 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.062 [ 00:24:09.062 { 00:24:09.062 "name": "BaseBdev4", 00:24:09.062 "aliases": [ 00:24:09.062 "33db6741-04c0-4f6c-aae1-568772dec095" 00:24:09.062 ], 00:24:09.062 "product_name": "Malloc disk", 00:24:09.062 "block_size": 512, 00:24:09.062 "num_blocks": 65536, 00:24:09.062 "uuid": "33db6741-04c0-4f6c-aae1-568772dec095", 00:24:09.062 "assigned_rate_limits": { 00:24:09.062 "rw_ios_per_sec": 0, 00:24:09.062 "rw_mbytes_per_sec": 0, 00:24:09.062 "r_mbytes_per_sec": 0, 00:24:09.062 "w_mbytes_per_sec": 0 00:24:09.062 }, 00:24:09.062 "claimed": true, 00:24:09.062 "claim_type": "exclusive_write", 00:24:09.062 "zoned": false, 00:24:09.062 "supported_io_types": { 00:24:09.062 "read": true, 00:24:09.062 "write": true, 00:24:09.062 "unmap": true, 00:24:09.062 "flush": true, 00:24:09.062 "reset": true, 00:24:09.062 
"nvme_admin": false, 00:24:09.062 "nvme_io": false, 00:24:09.062 "nvme_io_md": false, 00:24:09.062 "write_zeroes": true, 00:24:09.062 "zcopy": true, 00:24:09.062 "get_zone_info": false, 00:24:09.062 "zone_management": false, 00:24:09.062 "zone_append": false, 00:24:09.062 "compare": false, 00:24:09.062 "compare_and_write": false, 00:24:09.062 "abort": true, 00:24:09.062 "seek_hole": false, 00:24:09.062 "seek_data": false, 00:24:09.062 "copy": true, 00:24:09.062 "nvme_iov_md": false 00:24:09.062 }, 00:24:09.062 "memory_domains": [ 00:24:09.062 { 00:24:09.063 "dma_device_id": "system", 00:24:09.063 "dma_device_type": 1 00:24:09.063 }, 00:24:09.063 { 00:24:09.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:09.063 "dma_device_type": 2 00:24:09.063 } 00:24:09.063 ], 00:24:09.063 "driver_specific": {} 00:24:09.063 } 00:24:09.063 ] 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:09.063 13:44:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:09.063 "name": "Existed_Raid", 00:24:09.063 "uuid": "d2dcf89b-a2fb-4860-9ccf-66097304a4d3", 00:24:09.063 "strip_size_kb": 64, 00:24:09.063 "state": "online", 00:24:09.063 "raid_level": "raid0", 00:24:09.063 "superblock": false, 00:24:09.063 "num_base_bdevs": 4, 00:24:09.063 "num_base_bdevs_discovered": 4, 00:24:09.063 "num_base_bdevs_operational": 4, 00:24:09.063 "base_bdevs_list": [ 00:24:09.063 { 00:24:09.063 "name": "BaseBdev1", 00:24:09.063 "uuid": "4a85001e-a955-4e82-b45b-e6953ff91101", 00:24:09.063 "is_configured": true, 00:24:09.063 "data_offset": 0, 00:24:09.063 "data_size": 65536 00:24:09.063 }, 00:24:09.063 { 00:24:09.063 "name": "BaseBdev2", 00:24:09.063 "uuid": "f71c32f4-2dbf-4ffc-a0be-4045ed894a99", 00:24:09.063 "is_configured": true, 00:24:09.063 "data_offset": 0, 00:24:09.063 "data_size": 65536 00:24:09.063 }, 00:24:09.063 { 00:24:09.063 "name": "BaseBdev3", 00:24:09.063 "uuid": 
"c1f772da-26c8-411e-a9d0-0c442ab7a897", 00:24:09.063 "is_configured": true, 00:24:09.063 "data_offset": 0, 00:24:09.063 "data_size": 65536 00:24:09.063 }, 00:24:09.063 { 00:24:09.063 "name": "BaseBdev4", 00:24:09.063 "uuid": "33db6741-04c0-4f6c-aae1-568772dec095", 00:24:09.063 "is_configured": true, 00:24:09.063 "data_offset": 0, 00:24:09.063 "data_size": 65536 00:24:09.063 } 00:24:09.063 ] 00:24:09.063 }' 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:09.063 13:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.629 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:09.629 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:09.629 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:09.629 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:09.629 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:09.629 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:09.629 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:09.629 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:09.629 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.629 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.629 [2024-11-20 13:44:12.371038] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:09.629 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.629 13:44:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:09.629 "name": "Existed_Raid", 00:24:09.629 "aliases": [ 00:24:09.629 "d2dcf89b-a2fb-4860-9ccf-66097304a4d3" 00:24:09.629 ], 00:24:09.629 "product_name": "Raid Volume", 00:24:09.629 "block_size": 512, 00:24:09.629 "num_blocks": 262144, 00:24:09.629 "uuid": "d2dcf89b-a2fb-4860-9ccf-66097304a4d3", 00:24:09.629 "assigned_rate_limits": { 00:24:09.629 "rw_ios_per_sec": 0, 00:24:09.629 "rw_mbytes_per_sec": 0, 00:24:09.629 "r_mbytes_per_sec": 0, 00:24:09.629 "w_mbytes_per_sec": 0 00:24:09.629 }, 00:24:09.629 "claimed": false, 00:24:09.629 "zoned": false, 00:24:09.629 "supported_io_types": { 00:24:09.629 "read": true, 00:24:09.629 "write": true, 00:24:09.629 "unmap": true, 00:24:09.629 "flush": true, 00:24:09.629 "reset": true, 00:24:09.629 "nvme_admin": false, 00:24:09.629 "nvme_io": false, 00:24:09.629 "nvme_io_md": false, 00:24:09.629 "write_zeroes": true, 00:24:09.629 "zcopy": false, 00:24:09.629 "get_zone_info": false, 00:24:09.629 "zone_management": false, 00:24:09.629 "zone_append": false, 00:24:09.629 "compare": false, 00:24:09.629 "compare_and_write": false, 00:24:09.629 "abort": false, 00:24:09.629 "seek_hole": false, 00:24:09.629 "seek_data": false, 00:24:09.629 "copy": false, 00:24:09.629 "nvme_iov_md": false 00:24:09.629 }, 00:24:09.629 "memory_domains": [ 00:24:09.629 { 00:24:09.629 "dma_device_id": "system", 00:24:09.629 "dma_device_type": 1 00:24:09.629 }, 00:24:09.629 { 00:24:09.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:09.629 "dma_device_type": 2 00:24:09.629 }, 00:24:09.629 { 00:24:09.629 "dma_device_id": "system", 00:24:09.630 "dma_device_type": 1 00:24:09.630 }, 00:24:09.630 { 00:24:09.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:09.630 "dma_device_type": 2 00:24:09.630 }, 00:24:09.630 { 00:24:09.630 "dma_device_id": "system", 00:24:09.630 "dma_device_type": 1 00:24:09.630 }, 00:24:09.630 { 00:24:09.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:24:09.630 "dma_device_type": 2 00:24:09.630 }, 00:24:09.630 { 00:24:09.630 "dma_device_id": "system", 00:24:09.630 "dma_device_type": 1 00:24:09.630 }, 00:24:09.630 { 00:24:09.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:09.630 "dma_device_type": 2 00:24:09.630 } 00:24:09.630 ], 00:24:09.630 "driver_specific": { 00:24:09.630 "raid": { 00:24:09.630 "uuid": "d2dcf89b-a2fb-4860-9ccf-66097304a4d3", 00:24:09.630 "strip_size_kb": 64, 00:24:09.630 "state": "online", 00:24:09.630 "raid_level": "raid0", 00:24:09.630 "superblock": false, 00:24:09.630 "num_base_bdevs": 4, 00:24:09.630 "num_base_bdevs_discovered": 4, 00:24:09.630 "num_base_bdevs_operational": 4, 00:24:09.630 "base_bdevs_list": [ 00:24:09.630 { 00:24:09.630 "name": "BaseBdev1", 00:24:09.630 "uuid": "4a85001e-a955-4e82-b45b-e6953ff91101", 00:24:09.630 "is_configured": true, 00:24:09.630 "data_offset": 0, 00:24:09.630 "data_size": 65536 00:24:09.630 }, 00:24:09.630 { 00:24:09.630 "name": "BaseBdev2", 00:24:09.630 "uuid": "f71c32f4-2dbf-4ffc-a0be-4045ed894a99", 00:24:09.630 "is_configured": true, 00:24:09.630 "data_offset": 0, 00:24:09.630 "data_size": 65536 00:24:09.630 }, 00:24:09.630 { 00:24:09.630 "name": "BaseBdev3", 00:24:09.630 "uuid": "c1f772da-26c8-411e-a9d0-0c442ab7a897", 00:24:09.630 "is_configured": true, 00:24:09.630 "data_offset": 0, 00:24:09.630 "data_size": 65536 00:24:09.630 }, 00:24:09.630 { 00:24:09.630 "name": "BaseBdev4", 00:24:09.630 "uuid": "33db6741-04c0-4f6c-aae1-568772dec095", 00:24:09.630 "is_configured": true, 00:24:09.630 "data_offset": 0, 00:24:09.630 "data_size": 65536 00:24:09.630 } 00:24:09.630 ] 00:24:09.630 } 00:24:09.630 } 00:24:09.630 }' 00:24:09.630 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:09.630 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:09.630 BaseBdev2 00:24:09.630 BaseBdev3 
00:24:09.630 BaseBdev4' 00:24:09.630 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:09.630 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:09.630 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:09.630 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:09.630 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.630 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.630 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.889 13:44:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:09.889 13:44:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.889 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.889 [2024-11-20 13:44:12.746568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:09.889 [2024-11-20 13:44:12.746609] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:09.889 [2024-11-20 13:44:12.746707] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:10.148 "name": "Existed_Raid", 00:24:10.148 "uuid": "d2dcf89b-a2fb-4860-9ccf-66097304a4d3", 00:24:10.148 "strip_size_kb": 64, 00:24:10.148 "state": "offline", 00:24:10.148 "raid_level": "raid0", 00:24:10.148 "superblock": false, 00:24:10.148 "num_base_bdevs": 4, 00:24:10.148 "num_base_bdevs_discovered": 3, 00:24:10.148 "num_base_bdevs_operational": 3, 00:24:10.148 "base_bdevs_list": [ 00:24:10.148 { 00:24:10.148 "name": null, 00:24:10.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.148 "is_configured": false, 00:24:10.148 "data_offset": 0, 00:24:10.148 "data_size": 65536 00:24:10.148 }, 00:24:10.148 { 00:24:10.148 "name": "BaseBdev2", 00:24:10.148 "uuid": "f71c32f4-2dbf-4ffc-a0be-4045ed894a99", 00:24:10.148 "is_configured": 
true, 00:24:10.148 "data_offset": 0, 00:24:10.148 "data_size": 65536 00:24:10.148 }, 00:24:10.148 { 00:24:10.148 "name": "BaseBdev3", 00:24:10.148 "uuid": "c1f772da-26c8-411e-a9d0-0c442ab7a897", 00:24:10.148 "is_configured": true, 00:24:10.148 "data_offset": 0, 00:24:10.148 "data_size": 65536 00:24:10.148 }, 00:24:10.148 { 00:24:10.148 "name": "BaseBdev4", 00:24:10.148 "uuid": "33db6741-04c0-4f6c-aae1-568772dec095", 00:24:10.148 "is_configured": true, 00:24:10.148 "data_offset": 0, 00:24:10.148 "data_size": 65536 00:24:10.148 } 00:24:10.148 ] 00:24:10.148 }' 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:10.148 13:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.716 [2024-11-20 13:44:13.375706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.716 [2024-11-20 13:44:13.521985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:10.716 13:44:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.716 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.976 [2024-11-20 13:44:13.667354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:10.976 [2024-11-20 13:44:13.667419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.976 BaseBdev2 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.976 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.976 [ 00:24:10.976 { 00:24:10.976 "name": "BaseBdev2", 00:24:10.976 "aliases": [ 00:24:10.976 "33153781-a967-44d8-ae18-df7448a17315" 00:24:10.976 ], 00:24:10.976 "product_name": "Malloc disk", 00:24:10.976 "block_size": 512, 00:24:10.976 "num_blocks": 65536, 00:24:10.976 "uuid": "33153781-a967-44d8-ae18-df7448a17315", 00:24:10.976 "assigned_rate_limits": { 00:24:10.976 "rw_ios_per_sec": 0, 00:24:10.976 "rw_mbytes_per_sec": 0, 00:24:10.976 "r_mbytes_per_sec": 0, 00:24:10.976 "w_mbytes_per_sec": 0 00:24:10.976 }, 00:24:10.976 "claimed": false, 00:24:10.976 "zoned": false, 00:24:10.976 "supported_io_types": { 00:24:10.976 "read": true, 00:24:10.976 "write": true, 00:24:10.976 "unmap": true, 00:24:10.976 "flush": true, 00:24:10.976 "reset": true, 00:24:10.976 "nvme_admin": false, 00:24:10.976 "nvme_io": false, 00:24:10.976 "nvme_io_md": false, 00:24:10.976 "write_zeroes": true, 00:24:10.976 "zcopy": true, 00:24:10.976 "get_zone_info": false, 00:24:10.976 "zone_management": false, 00:24:10.976 "zone_append": false, 00:24:10.976 "compare": false, 00:24:10.976 "compare_and_write": false, 00:24:10.976 "abort": true, 00:24:10.976 "seek_hole": false, 00:24:10.976 "seek_data": false, 
00:24:10.976 "copy": true, 00:24:10.976 "nvme_iov_md": false 00:24:10.976 }, 00:24:10.976 "memory_domains": [ 00:24:10.976 { 00:24:10.976 "dma_device_id": "system", 00:24:10.976 "dma_device_type": 1 00:24:10.976 }, 00:24:10.976 { 00:24:10.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.235 "dma_device_type": 2 00:24:11.235 } 00:24:11.235 ], 00:24:11.235 "driver_specific": {} 00:24:11.235 } 00:24:11.235 ] 00:24:11.235 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.235 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:11.235 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:11.235 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.236 BaseBdev3 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:11.236 
13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.236 [ 00:24:11.236 { 00:24:11.236 "name": "BaseBdev3", 00:24:11.236 "aliases": [ 00:24:11.236 "153cc196-efd9-4e09-bc26-9edd25d489af" 00:24:11.236 ], 00:24:11.236 "product_name": "Malloc disk", 00:24:11.236 "block_size": 512, 00:24:11.236 "num_blocks": 65536, 00:24:11.236 "uuid": "153cc196-efd9-4e09-bc26-9edd25d489af", 00:24:11.236 "assigned_rate_limits": { 00:24:11.236 "rw_ios_per_sec": 0, 00:24:11.236 "rw_mbytes_per_sec": 0, 00:24:11.236 "r_mbytes_per_sec": 0, 00:24:11.236 "w_mbytes_per_sec": 0 00:24:11.236 }, 00:24:11.236 "claimed": false, 00:24:11.236 "zoned": false, 00:24:11.236 "supported_io_types": { 00:24:11.236 "read": true, 00:24:11.236 "write": true, 00:24:11.236 "unmap": true, 00:24:11.236 "flush": true, 00:24:11.236 "reset": true, 00:24:11.236 "nvme_admin": false, 00:24:11.236 "nvme_io": false, 00:24:11.236 "nvme_io_md": false, 00:24:11.236 "write_zeroes": true, 00:24:11.236 "zcopy": true, 00:24:11.236 "get_zone_info": false, 00:24:11.236 "zone_management": false, 00:24:11.236 "zone_append": false, 00:24:11.236 "compare": false, 00:24:11.236 "compare_and_write": false, 00:24:11.236 "abort": true, 00:24:11.236 "seek_hole": false, 00:24:11.236 "seek_data": false, 00:24:11.236 
"copy": true, 00:24:11.236 "nvme_iov_md": false 00:24:11.236 }, 00:24:11.236 "memory_domains": [ 00:24:11.236 { 00:24:11.236 "dma_device_id": "system", 00:24:11.236 "dma_device_type": 1 00:24:11.236 }, 00:24:11.236 { 00:24:11.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.236 "dma_device_type": 2 00:24:11.236 } 00:24:11.236 ], 00:24:11.236 "driver_specific": {} 00:24:11.236 } 00:24:11.236 ] 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.236 13:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.236 BaseBdev4 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:11.236 13:44:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.236 [ 00:24:11.236 { 00:24:11.236 "name": "BaseBdev4", 00:24:11.236 "aliases": [ 00:24:11.236 "55751095-a22a-47c3-a2ce-6e1841669348" 00:24:11.236 ], 00:24:11.236 "product_name": "Malloc disk", 00:24:11.236 "block_size": 512, 00:24:11.236 "num_blocks": 65536, 00:24:11.236 "uuid": "55751095-a22a-47c3-a2ce-6e1841669348", 00:24:11.236 "assigned_rate_limits": { 00:24:11.236 "rw_ios_per_sec": 0, 00:24:11.236 "rw_mbytes_per_sec": 0, 00:24:11.236 "r_mbytes_per_sec": 0, 00:24:11.236 "w_mbytes_per_sec": 0 00:24:11.236 }, 00:24:11.236 "claimed": false, 00:24:11.236 "zoned": false, 00:24:11.236 "supported_io_types": { 00:24:11.236 "read": true, 00:24:11.236 "write": true, 00:24:11.236 "unmap": true, 00:24:11.236 "flush": true, 00:24:11.236 "reset": true, 00:24:11.236 "nvme_admin": false, 00:24:11.236 "nvme_io": false, 00:24:11.236 "nvme_io_md": false, 00:24:11.236 "write_zeroes": true, 00:24:11.236 "zcopy": true, 00:24:11.236 "get_zone_info": false, 00:24:11.236 "zone_management": false, 00:24:11.236 "zone_append": false, 00:24:11.236 "compare": false, 00:24:11.236 "compare_and_write": false, 00:24:11.236 "abort": true, 00:24:11.236 "seek_hole": false, 00:24:11.236 "seek_data": false, 00:24:11.236 "copy": true, 
00:24:11.236 "nvme_iov_md": false 00:24:11.236 }, 00:24:11.236 "memory_domains": [ 00:24:11.236 { 00:24:11.236 "dma_device_id": "system", 00:24:11.236 "dma_device_type": 1 00:24:11.236 }, 00:24:11.236 { 00:24:11.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.236 "dma_device_type": 2 00:24:11.236 } 00:24:11.236 ], 00:24:11.236 "driver_specific": {} 00:24:11.236 } 00:24:11.236 ] 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.236 [2024-11-20 13:44:14.049680] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:11.236 [2024-11-20 13:44:14.049747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:11.236 [2024-11-20 13:44:14.049787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:11.236 [2024-11-20 13:44:14.052533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:11.236 [2024-11-20 13:44:14.052608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.236 13:44:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.236 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:11.236 "name": "Existed_Raid", 00:24:11.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:11.236 "strip_size_kb": 64, 00:24:11.236 "state": "configuring", 00:24:11.236 
"raid_level": "raid0", 00:24:11.237 "superblock": false, 00:24:11.237 "num_base_bdevs": 4, 00:24:11.237 "num_base_bdevs_discovered": 3, 00:24:11.237 "num_base_bdevs_operational": 4, 00:24:11.237 "base_bdevs_list": [ 00:24:11.237 { 00:24:11.237 "name": "BaseBdev1", 00:24:11.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:11.237 "is_configured": false, 00:24:11.237 "data_offset": 0, 00:24:11.237 "data_size": 0 00:24:11.237 }, 00:24:11.237 { 00:24:11.237 "name": "BaseBdev2", 00:24:11.237 "uuid": "33153781-a967-44d8-ae18-df7448a17315", 00:24:11.237 "is_configured": true, 00:24:11.237 "data_offset": 0, 00:24:11.237 "data_size": 65536 00:24:11.237 }, 00:24:11.237 { 00:24:11.237 "name": "BaseBdev3", 00:24:11.237 "uuid": "153cc196-efd9-4e09-bc26-9edd25d489af", 00:24:11.237 "is_configured": true, 00:24:11.237 "data_offset": 0, 00:24:11.237 "data_size": 65536 00:24:11.237 }, 00:24:11.237 { 00:24:11.237 "name": "BaseBdev4", 00:24:11.237 "uuid": "55751095-a22a-47c3-a2ce-6e1841669348", 00:24:11.237 "is_configured": true, 00:24:11.237 "data_offset": 0, 00:24:11.237 "data_size": 65536 00:24:11.237 } 00:24:11.237 ] 00:24:11.237 }' 00:24:11.237 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:11.237 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.804 [2024-11-20 13:44:14.593792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:11.804 "name": "Existed_Raid", 00:24:11.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:11.804 "strip_size_kb": 64, 00:24:11.804 "state": "configuring", 00:24:11.804 "raid_level": "raid0", 00:24:11.804 "superblock": false, 00:24:11.804 
"num_base_bdevs": 4, 00:24:11.804 "num_base_bdevs_discovered": 2, 00:24:11.804 "num_base_bdevs_operational": 4, 00:24:11.804 "base_bdevs_list": [ 00:24:11.804 { 00:24:11.804 "name": "BaseBdev1", 00:24:11.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:11.804 "is_configured": false, 00:24:11.804 "data_offset": 0, 00:24:11.804 "data_size": 0 00:24:11.804 }, 00:24:11.804 { 00:24:11.804 "name": null, 00:24:11.804 "uuid": "33153781-a967-44d8-ae18-df7448a17315", 00:24:11.804 "is_configured": false, 00:24:11.804 "data_offset": 0, 00:24:11.804 "data_size": 65536 00:24:11.804 }, 00:24:11.804 { 00:24:11.804 "name": "BaseBdev3", 00:24:11.804 "uuid": "153cc196-efd9-4e09-bc26-9edd25d489af", 00:24:11.804 "is_configured": true, 00:24:11.804 "data_offset": 0, 00:24:11.804 "data_size": 65536 00:24:11.804 }, 00:24:11.804 { 00:24:11.804 "name": "BaseBdev4", 00:24:11.804 "uuid": "55751095-a22a-47c3-a2ce-6e1841669348", 00:24:11.804 "is_configured": true, 00:24:11.804 "data_offset": 0, 00:24:11.804 "data_size": 65536 00:24:11.804 } 00:24:11.804 ] 00:24:11.804 }' 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:11.804 13:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.372 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:12.372 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.372 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.372 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.372 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.372 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:24:12.372 13:44:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:12.372 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.372 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.372 [2024-11-20 13:44:15.200006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:12.372 BaseBdev1 00:24:12.372 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.372 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:24:12.372 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:24:12.372 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:12.372 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:12.372 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:12.372 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:12.373 [ 00:24:12.373 { 00:24:12.373 "name": "BaseBdev1", 00:24:12.373 "aliases": [ 00:24:12.373 "29deeb79-c660-404f-abec-1df35c0f3e85" 00:24:12.373 ], 00:24:12.373 "product_name": "Malloc disk", 00:24:12.373 "block_size": 512, 00:24:12.373 "num_blocks": 65536, 00:24:12.373 "uuid": "29deeb79-c660-404f-abec-1df35c0f3e85", 00:24:12.373 "assigned_rate_limits": { 00:24:12.373 "rw_ios_per_sec": 0, 00:24:12.373 "rw_mbytes_per_sec": 0, 00:24:12.373 "r_mbytes_per_sec": 0, 00:24:12.373 "w_mbytes_per_sec": 0 00:24:12.373 }, 00:24:12.373 "claimed": true, 00:24:12.373 "claim_type": "exclusive_write", 00:24:12.373 "zoned": false, 00:24:12.373 "supported_io_types": { 00:24:12.373 "read": true, 00:24:12.373 "write": true, 00:24:12.373 "unmap": true, 00:24:12.373 "flush": true, 00:24:12.373 "reset": true, 00:24:12.373 "nvme_admin": false, 00:24:12.373 "nvme_io": false, 00:24:12.373 "nvme_io_md": false, 00:24:12.373 "write_zeroes": true, 00:24:12.373 "zcopy": true, 00:24:12.373 "get_zone_info": false, 00:24:12.373 "zone_management": false, 00:24:12.373 "zone_append": false, 00:24:12.373 "compare": false, 00:24:12.373 "compare_and_write": false, 00:24:12.373 "abort": true, 00:24:12.373 "seek_hole": false, 00:24:12.373 "seek_data": false, 00:24:12.373 "copy": true, 00:24:12.373 "nvme_iov_md": false 00:24:12.373 }, 00:24:12.373 "memory_domains": [ 00:24:12.373 { 00:24:12.373 "dma_device_id": "system", 00:24:12.373 "dma_device_type": 1 00:24:12.373 }, 00:24:12.373 { 00:24:12.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:12.373 "dma_device_type": 2 00:24:12.373 } 00:24:12.373 ], 00:24:12.373 "driver_specific": {} 00:24:12.373 } 00:24:12.373 ] 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:12.373 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.631 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:12.631 "name": "Existed_Raid", 00:24:12.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.631 "strip_size_kb": 64, 00:24:12.631 "state": "configuring", 00:24:12.631 "raid_level": "raid0", 00:24:12.631 "superblock": false, 
00:24:12.631 "num_base_bdevs": 4, 00:24:12.631 "num_base_bdevs_discovered": 3, 00:24:12.631 "num_base_bdevs_operational": 4, 00:24:12.631 "base_bdevs_list": [ 00:24:12.631 { 00:24:12.631 "name": "BaseBdev1", 00:24:12.631 "uuid": "29deeb79-c660-404f-abec-1df35c0f3e85", 00:24:12.631 "is_configured": true, 00:24:12.631 "data_offset": 0, 00:24:12.631 "data_size": 65536 00:24:12.631 }, 00:24:12.631 { 00:24:12.631 "name": null, 00:24:12.631 "uuid": "33153781-a967-44d8-ae18-df7448a17315", 00:24:12.631 "is_configured": false, 00:24:12.631 "data_offset": 0, 00:24:12.631 "data_size": 65536 00:24:12.631 }, 00:24:12.631 { 00:24:12.631 "name": "BaseBdev3", 00:24:12.631 "uuid": "153cc196-efd9-4e09-bc26-9edd25d489af", 00:24:12.631 "is_configured": true, 00:24:12.631 "data_offset": 0, 00:24:12.631 "data_size": 65536 00:24:12.631 }, 00:24:12.631 { 00:24:12.631 "name": "BaseBdev4", 00:24:12.631 "uuid": "55751095-a22a-47c3-a2ce-6e1841669348", 00:24:12.631 "is_configured": true, 00:24:12.631 "data_offset": 0, 00:24:12.631 "data_size": 65536 00:24:12.631 } 00:24:12.631 ] 00:24:12.631 }' 00:24:12.631 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:12.631 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.890 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.890 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.890 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.890 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:12.890 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:24:13.150 13:44:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.150 [2024-11-20 13:44:15.816752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.150 13:44:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:13.150 "name": "Existed_Raid", 00:24:13.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.150 "strip_size_kb": 64, 00:24:13.150 "state": "configuring", 00:24:13.150 "raid_level": "raid0", 00:24:13.150 "superblock": false, 00:24:13.150 "num_base_bdevs": 4, 00:24:13.150 "num_base_bdevs_discovered": 2, 00:24:13.150 "num_base_bdevs_operational": 4, 00:24:13.150 "base_bdevs_list": [ 00:24:13.150 { 00:24:13.150 "name": "BaseBdev1", 00:24:13.150 "uuid": "29deeb79-c660-404f-abec-1df35c0f3e85", 00:24:13.150 "is_configured": true, 00:24:13.150 "data_offset": 0, 00:24:13.150 "data_size": 65536 00:24:13.150 }, 00:24:13.150 { 00:24:13.150 "name": null, 00:24:13.150 "uuid": "33153781-a967-44d8-ae18-df7448a17315", 00:24:13.150 "is_configured": false, 00:24:13.150 "data_offset": 0, 00:24:13.150 "data_size": 65536 00:24:13.150 }, 00:24:13.150 { 00:24:13.150 "name": null, 00:24:13.150 "uuid": "153cc196-efd9-4e09-bc26-9edd25d489af", 00:24:13.150 "is_configured": false, 00:24:13.150 "data_offset": 0, 00:24:13.150 "data_size": 65536 00:24:13.150 }, 00:24:13.150 { 00:24:13.150 "name": "BaseBdev4", 00:24:13.150 "uuid": "55751095-a22a-47c3-a2ce-6e1841669348", 00:24:13.150 "is_configured": true, 00:24:13.150 "data_offset": 0, 00:24:13.150 "data_size": 65536 00:24:13.150 } 00:24:13.150 ] 00:24:13.150 }' 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:13.150 13:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.540 [2024-11-20 13:44:16.408868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:13.540 13:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.798 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:13.798 "name": "Existed_Raid", 00:24:13.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.798 "strip_size_kb": 64, 00:24:13.798 "state": "configuring", 00:24:13.798 "raid_level": "raid0", 00:24:13.798 "superblock": false, 00:24:13.798 "num_base_bdevs": 4, 00:24:13.798 "num_base_bdevs_discovered": 3, 00:24:13.798 "num_base_bdevs_operational": 4, 00:24:13.798 "base_bdevs_list": [ 00:24:13.798 { 00:24:13.798 "name": "BaseBdev1", 00:24:13.798 "uuid": "29deeb79-c660-404f-abec-1df35c0f3e85", 00:24:13.798 "is_configured": true, 00:24:13.798 "data_offset": 0, 00:24:13.798 "data_size": 65536 00:24:13.798 }, 00:24:13.798 { 00:24:13.798 "name": null, 00:24:13.798 "uuid": "33153781-a967-44d8-ae18-df7448a17315", 00:24:13.798 "is_configured": false, 00:24:13.798 "data_offset": 0, 00:24:13.798 "data_size": 65536 00:24:13.798 }, 00:24:13.798 { 00:24:13.798 "name": "BaseBdev3", 00:24:13.798 "uuid": "153cc196-efd9-4e09-bc26-9edd25d489af", 
00:24:13.798 "is_configured": true, 00:24:13.798 "data_offset": 0, 00:24:13.798 "data_size": 65536 00:24:13.798 }, 00:24:13.798 { 00:24:13.798 "name": "BaseBdev4", 00:24:13.798 "uuid": "55751095-a22a-47c3-a2ce-6e1841669348", 00:24:13.798 "is_configured": true, 00:24:13.798 "data_offset": 0, 00:24:13.798 "data_size": 65536 00:24:13.798 } 00:24:13.798 ] 00:24:13.798 }' 00:24:13.798 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:13.798 13:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.056 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:14.056 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.056 13:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.056 13:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.056 13:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.316 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:24:14.316 13:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:14.316 13:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.316 13:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.316 [2024-11-20 13:44:16.993073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:14.316 13:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.316 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:14.316 13:44:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:14.316 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:14.316 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:14.316 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:14.316 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:14.316 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:14.316 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:14.316 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:14.316 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:14.316 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.316 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:14.316 13:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.316 13:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.316 13:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.316 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:14.316 "name": "Existed_Raid", 00:24:14.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.316 "strip_size_kb": 64, 00:24:14.316 "state": "configuring", 00:24:14.316 "raid_level": "raid0", 00:24:14.316 "superblock": false, 00:24:14.316 "num_base_bdevs": 4, 00:24:14.316 "num_base_bdevs_discovered": 2, 00:24:14.316 
"num_base_bdevs_operational": 4, 00:24:14.316 "base_bdevs_list": [ 00:24:14.316 { 00:24:14.316 "name": null, 00:24:14.316 "uuid": "29deeb79-c660-404f-abec-1df35c0f3e85", 00:24:14.316 "is_configured": false, 00:24:14.316 "data_offset": 0, 00:24:14.316 "data_size": 65536 00:24:14.316 }, 00:24:14.316 { 00:24:14.316 "name": null, 00:24:14.316 "uuid": "33153781-a967-44d8-ae18-df7448a17315", 00:24:14.316 "is_configured": false, 00:24:14.316 "data_offset": 0, 00:24:14.316 "data_size": 65536 00:24:14.316 }, 00:24:14.316 { 00:24:14.316 "name": "BaseBdev3", 00:24:14.316 "uuid": "153cc196-efd9-4e09-bc26-9edd25d489af", 00:24:14.316 "is_configured": true, 00:24:14.316 "data_offset": 0, 00:24:14.316 "data_size": 65536 00:24:14.316 }, 00:24:14.316 { 00:24:14.316 "name": "BaseBdev4", 00:24:14.316 "uuid": "55751095-a22a-47c3-a2ce-6e1841669348", 00:24:14.316 "is_configured": true, 00:24:14.316 "data_offset": 0, 00:24:14.316 "data_size": 65536 00:24:14.316 } 00:24:14.316 ] 00:24:14.316 }' 00:24:14.316 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:14.316 13:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.883 [2024-11-20 13:44:17.666838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.883 13:44:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:14.883 "name": "Existed_Raid", 00:24:14.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.883 "strip_size_kb": 64, 00:24:14.883 "state": "configuring", 00:24:14.883 "raid_level": "raid0", 00:24:14.883 "superblock": false, 00:24:14.883 "num_base_bdevs": 4, 00:24:14.883 "num_base_bdevs_discovered": 3, 00:24:14.883 "num_base_bdevs_operational": 4, 00:24:14.883 "base_bdevs_list": [ 00:24:14.883 { 00:24:14.883 "name": null, 00:24:14.883 "uuid": "29deeb79-c660-404f-abec-1df35c0f3e85", 00:24:14.883 "is_configured": false, 00:24:14.883 "data_offset": 0, 00:24:14.883 "data_size": 65536 00:24:14.883 }, 00:24:14.883 { 00:24:14.883 "name": "BaseBdev2", 00:24:14.883 "uuid": "33153781-a967-44d8-ae18-df7448a17315", 00:24:14.883 "is_configured": true, 00:24:14.883 "data_offset": 0, 00:24:14.883 "data_size": 65536 00:24:14.883 }, 00:24:14.883 { 00:24:14.883 "name": "BaseBdev3", 00:24:14.883 "uuid": "153cc196-efd9-4e09-bc26-9edd25d489af", 00:24:14.883 "is_configured": true, 00:24:14.883 "data_offset": 0, 00:24:14.883 "data_size": 65536 00:24:14.883 }, 00:24:14.883 { 00:24:14.883 "name": "BaseBdev4", 00:24:14.883 "uuid": "55751095-a22a-47c3-a2ce-6e1841669348", 00:24:14.883 "is_configured": true, 00:24:14.883 "data_offset": 0, 00:24:14.883 "data_size": 65536 00:24:14.883 } 00:24:14.883 ] 00:24:14.883 }' 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:14.883 13:44:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.451 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:15.451 
13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:15.451 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.451 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.451 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.451 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:24:15.451 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:15.451 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:15.451 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.451 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.451 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.451 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 29deeb79-c660-404f-abec-1df35c0f3e85 00:24:15.451 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.451 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.710 [2024-11-20 13:44:18.376987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:15.710 [2024-11-20 13:44:18.377072] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:15.710 [2024-11-20 13:44:18.377084] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:24:15.710 [2024-11-20 13:44:18.377417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:24:15.710 [2024-11-20 13:44:18.377599] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:15.710 [2024-11-20 13:44:18.377620] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:24:15.710 [2024-11-20 13:44:18.377966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:15.710 NewBaseBdev 00:24:15.710 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.710 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:24:15.710 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:24:15.710 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:15.710 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:15.710 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:15.710 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:15.710 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:15.710 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.710 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.710 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.710 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:15.710 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.710 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:24:15.710 [ 00:24:15.710 { 00:24:15.710 "name": "NewBaseBdev", 00:24:15.710 "aliases": [ 00:24:15.710 "29deeb79-c660-404f-abec-1df35c0f3e85" 00:24:15.710 ], 00:24:15.710 "product_name": "Malloc disk", 00:24:15.710 "block_size": 512, 00:24:15.710 "num_blocks": 65536, 00:24:15.710 "uuid": "29deeb79-c660-404f-abec-1df35c0f3e85", 00:24:15.710 "assigned_rate_limits": { 00:24:15.710 "rw_ios_per_sec": 0, 00:24:15.710 "rw_mbytes_per_sec": 0, 00:24:15.710 "r_mbytes_per_sec": 0, 00:24:15.710 "w_mbytes_per_sec": 0 00:24:15.710 }, 00:24:15.710 "claimed": true, 00:24:15.710 "claim_type": "exclusive_write", 00:24:15.711 "zoned": false, 00:24:15.711 "supported_io_types": { 00:24:15.711 "read": true, 00:24:15.711 "write": true, 00:24:15.711 "unmap": true, 00:24:15.711 "flush": true, 00:24:15.711 "reset": true, 00:24:15.711 "nvme_admin": false, 00:24:15.711 "nvme_io": false, 00:24:15.711 "nvme_io_md": false, 00:24:15.711 "write_zeroes": true, 00:24:15.711 "zcopy": true, 00:24:15.711 "get_zone_info": false, 00:24:15.711 "zone_management": false, 00:24:15.711 "zone_append": false, 00:24:15.711 "compare": false, 00:24:15.711 "compare_and_write": false, 00:24:15.711 "abort": true, 00:24:15.711 "seek_hole": false, 00:24:15.711 "seek_data": false, 00:24:15.711 "copy": true, 00:24:15.711 "nvme_iov_md": false 00:24:15.711 }, 00:24:15.711 "memory_domains": [ 00:24:15.711 { 00:24:15.711 "dma_device_id": "system", 00:24:15.711 "dma_device_type": 1 00:24:15.711 }, 00:24:15.711 { 00:24:15.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:15.711 "dma_device_type": 2 00:24:15.711 } 00:24:15.711 ], 00:24:15.711 "driver_specific": {} 00:24:15.711 } 00:24:15.711 ] 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:15.711 "name": "Existed_Raid", 00:24:15.711 "uuid": "ecc356b1-5f1c-4b12-8c74-1085b5490b72", 00:24:15.711 "strip_size_kb": 64, 00:24:15.711 "state": "online", 00:24:15.711 "raid_level": "raid0", 00:24:15.711 "superblock": false, 00:24:15.711 "num_base_bdevs": 4, 00:24:15.711 
"num_base_bdevs_discovered": 4, 00:24:15.711 "num_base_bdevs_operational": 4, 00:24:15.711 "base_bdevs_list": [ 00:24:15.711 { 00:24:15.711 "name": "NewBaseBdev", 00:24:15.711 "uuid": "29deeb79-c660-404f-abec-1df35c0f3e85", 00:24:15.711 "is_configured": true, 00:24:15.711 "data_offset": 0, 00:24:15.711 "data_size": 65536 00:24:15.711 }, 00:24:15.711 { 00:24:15.711 "name": "BaseBdev2", 00:24:15.711 "uuid": "33153781-a967-44d8-ae18-df7448a17315", 00:24:15.711 "is_configured": true, 00:24:15.711 "data_offset": 0, 00:24:15.711 "data_size": 65536 00:24:15.711 }, 00:24:15.711 { 00:24:15.711 "name": "BaseBdev3", 00:24:15.711 "uuid": "153cc196-efd9-4e09-bc26-9edd25d489af", 00:24:15.711 "is_configured": true, 00:24:15.711 "data_offset": 0, 00:24:15.711 "data_size": 65536 00:24:15.711 }, 00:24:15.711 { 00:24:15.711 "name": "BaseBdev4", 00:24:15.711 "uuid": "55751095-a22a-47c3-a2ce-6e1841669348", 00:24:15.711 "is_configured": true, 00:24:15.711 "data_offset": 0, 00:24:15.711 "data_size": 65536 00:24:15.711 } 00:24:15.711 ] 00:24:15.711 }' 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:15.711 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.279 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:24:16.279 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:16.279 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:16.279 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:16.279 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:16.279 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:16.279 13:44:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:16.279 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:16.279 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.279 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.279 [2024-11-20 13:44:18.949651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:16.279 13:44:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.279 13:44:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:16.279 "name": "Existed_Raid", 00:24:16.279 "aliases": [ 00:24:16.279 "ecc356b1-5f1c-4b12-8c74-1085b5490b72" 00:24:16.279 ], 00:24:16.279 "product_name": "Raid Volume", 00:24:16.279 "block_size": 512, 00:24:16.279 "num_blocks": 262144, 00:24:16.279 "uuid": "ecc356b1-5f1c-4b12-8c74-1085b5490b72", 00:24:16.279 "assigned_rate_limits": { 00:24:16.279 "rw_ios_per_sec": 0, 00:24:16.279 "rw_mbytes_per_sec": 0, 00:24:16.279 "r_mbytes_per_sec": 0, 00:24:16.279 "w_mbytes_per_sec": 0 00:24:16.279 }, 00:24:16.279 "claimed": false, 00:24:16.279 "zoned": false, 00:24:16.279 "supported_io_types": { 00:24:16.279 "read": true, 00:24:16.279 "write": true, 00:24:16.279 "unmap": true, 00:24:16.279 "flush": true, 00:24:16.279 "reset": true, 00:24:16.279 "nvme_admin": false, 00:24:16.279 "nvme_io": false, 00:24:16.279 "nvme_io_md": false, 00:24:16.279 "write_zeroes": true, 00:24:16.279 "zcopy": false, 00:24:16.279 "get_zone_info": false, 00:24:16.279 "zone_management": false, 00:24:16.279 "zone_append": false, 00:24:16.279 "compare": false, 00:24:16.279 "compare_and_write": false, 00:24:16.279 "abort": false, 00:24:16.279 "seek_hole": false, 00:24:16.279 "seek_data": false, 00:24:16.279 "copy": false, 00:24:16.279 "nvme_iov_md": false 00:24:16.279 }, 00:24:16.279 "memory_domains": [ 
00:24:16.279 { 00:24:16.279 "dma_device_id": "system", 00:24:16.279 "dma_device_type": 1 00:24:16.279 }, 00:24:16.279 { 00:24:16.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:16.279 "dma_device_type": 2 00:24:16.279 }, 00:24:16.279 { 00:24:16.279 "dma_device_id": "system", 00:24:16.279 "dma_device_type": 1 00:24:16.279 }, 00:24:16.279 { 00:24:16.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:16.279 "dma_device_type": 2 00:24:16.279 }, 00:24:16.279 { 00:24:16.279 "dma_device_id": "system", 00:24:16.279 "dma_device_type": 1 00:24:16.279 }, 00:24:16.279 { 00:24:16.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:16.279 "dma_device_type": 2 00:24:16.279 }, 00:24:16.279 { 00:24:16.279 "dma_device_id": "system", 00:24:16.279 "dma_device_type": 1 00:24:16.279 }, 00:24:16.279 { 00:24:16.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:16.279 "dma_device_type": 2 00:24:16.279 } 00:24:16.279 ], 00:24:16.279 "driver_specific": { 00:24:16.279 "raid": { 00:24:16.279 "uuid": "ecc356b1-5f1c-4b12-8c74-1085b5490b72", 00:24:16.279 "strip_size_kb": 64, 00:24:16.279 "state": "online", 00:24:16.279 "raid_level": "raid0", 00:24:16.279 "superblock": false, 00:24:16.279 "num_base_bdevs": 4, 00:24:16.279 "num_base_bdevs_discovered": 4, 00:24:16.279 "num_base_bdevs_operational": 4, 00:24:16.279 "base_bdevs_list": [ 00:24:16.279 { 00:24:16.279 "name": "NewBaseBdev", 00:24:16.279 "uuid": "29deeb79-c660-404f-abec-1df35c0f3e85", 00:24:16.279 "is_configured": true, 00:24:16.279 "data_offset": 0, 00:24:16.279 "data_size": 65536 00:24:16.279 }, 00:24:16.279 { 00:24:16.279 "name": "BaseBdev2", 00:24:16.279 "uuid": "33153781-a967-44d8-ae18-df7448a17315", 00:24:16.279 "is_configured": true, 00:24:16.279 "data_offset": 0, 00:24:16.279 "data_size": 65536 00:24:16.279 }, 00:24:16.279 { 00:24:16.279 "name": "BaseBdev3", 00:24:16.279 "uuid": "153cc196-efd9-4e09-bc26-9edd25d489af", 00:24:16.279 "is_configured": true, 00:24:16.279 "data_offset": 0, 00:24:16.279 "data_size": 65536 
00:24:16.279 }, 00:24:16.279 { 00:24:16.279 "name": "BaseBdev4", 00:24:16.279 "uuid": "55751095-a22a-47c3-a2ce-6e1841669348", 00:24:16.279 "is_configured": true, 00:24:16.279 "data_offset": 0, 00:24:16.279 "data_size": 65536 00:24:16.279 } 00:24:16.279 ] 00:24:16.279 } 00:24:16.279 } 00:24:16.279 }' 00:24:16.279 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:16.279 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:24:16.279 BaseBdev2 00:24:16.279 BaseBdev3 00:24:16.279 BaseBdev4' 00:24:16.279 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:16.279 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:16.279 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:16.279 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:24:16.279 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.279 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.279 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:16.279 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.279 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:16.279 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:16.279 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:16.279 
13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:16.279 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.279 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.279 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:16.279 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.538 [2024-11-20 13:44:19.309284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:16.538 [2024-11-20 13:44:19.309324] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:16.538 [2024-11-20 13:44:19.309442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:16.538 [2024-11-20 13:44:19.309538] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:16.538 [2024-11-20 13:44:19.309555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69646 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69646 ']' 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69646 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69646 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69646' 00:24:16.538 killing process with pid 69646 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69646 00:24:16.538 [2024-11-20 13:44:19.349385] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:16.538 13:44:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69646 00:24:17.106 [2024-11-20 13:44:19.713388] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:24:18.042 00:24:18.042 real 0m12.943s 00:24:18.042 user 0m21.449s 00:24:18.042 sys 0m1.787s 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.042 ************************************ 00:24:18.042 END TEST raid_state_function_test 00:24:18.042 ************************************ 00:24:18.042 13:44:20 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:24:18.042 13:44:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:18.042 13:44:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:18.042 13:44:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:18.042 ************************************ 00:24:18.042 START TEST raid_state_function_test_sb 00:24:18.042 ************************************ 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:24:18.042 
13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:24:18.042 Process raid pid: 70330 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70330 00:24:18.042 13:44:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70330' 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70330 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70330 ']' 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:18.042 13:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.042 [2024-11-20 13:44:20.928081] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:24:18.042 [2024-11-20 13:44:20.928243] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.301 [2024-11-20 13:44:21.105727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.559 [2024-11-20 13:44:21.239039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.559 [2024-11-20 13:44:21.448177] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:18.559 [2024-11-20 13:44:21.448461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.126 [2024-11-20 13:44:21.952140] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:19.126 [2024-11-20 13:44:21.952208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:19.126 [2024-11-20 13:44:21.952226] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:19.126 [2024-11-20 13:44:21.952243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:19.126 [2024-11-20 13:44:21.952254] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:24:19.126 [2024-11-20 13:44:21.952269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:19.126 [2024-11-20 13:44:21.952279] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:19.126 [2024-11-20 13:44:21.952294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.126 13:44:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.126 13:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.126 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:19.126 "name": "Existed_Raid", 00:24:19.126 "uuid": "8042c2fd-be10-4e5d-aeed-548373d45064", 00:24:19.126 "strip_size_kb": 64, 00:24:19.126 "state": "configuring", 00:24:19.126 "raid_level": "raid0", 00:24:19.126 "superblock": true, 00:24:19.126 "num_base_bdevs": 4, 00:24:19.126 "num_base_bdevs_discovered": 0, 00:24:19.126 "num_base_bdevs_operational": 4, 00:24:19.126 "base_bdevs_list": [ 00:24:19.126 { 00:24:19.126 "name": "BaseBdev1", 00:24:19.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.126 "is_configured": false, 00:24:19.126 "data_offset": 0, 00:24:19.126 "data_size": 0 00:24:19.126 }, 00:24:19.126 { 00:24:19.126 "name": "BaseBdev2", 00:24:19.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.126 "is_configured": false, 00:24:19.126 "data_offset": 0, 00:24:19.126 "data_size": 0 00:24:19.126 }, 00:24:19.126 { 00:24:19.126 "name": "BaseBdev3", 00:24:19.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.126 "is_configured": false, 00:24:19.126 "data_offset": 0, 00:24:19.126 "data_size": 0 00:24:19.126 }, 00:24:19.126 { 00:24:19.126 "name": "BaseBdev4", 00:24:19.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.126 "is_configured": false, 00:24:19.126 "data_offset": 0, 00:24:19.126 "data_size": 0 00:24:19.126 } 00:24:19.126 ] 00:24:19.126 }' 00:24:19.126 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:19.126 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.762 13:44:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.762 [2024-11-20 13:44:22.420227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:19.762 [2024-11-20 13:44:22.420443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.762 [2024-11-20 13:44:22.428230] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:19.762 [2024-11-20 13:44:22.428304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:19.762 [2024-11-20 13:44:22.428324] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:19.762 [2024-11-20 13:44:22.428345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:19.762 [2024-11-20 13:44:22.428357] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:19.762 [2024-11-20 13:44:22.428375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:19.762 [2024-11-20 13:44:22.428388] bdev.c:8685:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:24:19.762 [2024-11-20 13:44:22.428405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.762 [2024-11-20 13:44:22.480571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:19.762 BaseBdev1 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.762 [ 00:24:19.762 { 00:24:19.762 "name": "BaseBdev1", 00:24:19.762 "aliases": [ 00:24:19.762 "19493e00-76da-4db9-b31d-641bfccc2b43" 00:24:19.762 ], 00:24:19.762 "product_name": "Malloc disk", 00:24:19.762 "block_size": 512, 00:24:19.762 "num_blocks": 65536, 00:24:19.762 "uuid": "19493e00-76da-4db9-b31d-641bfccc2b43", 00:24:19.762 "assigned_rate_limits": { 00:24:19.762 "rw_ios_per_sec": 0, 00:24:19.762 "rw_mbytes_per_sec": 0, 00:24:19.762 "r_mbytes_per_sec": 0, 00:24:19.762 "w_mbytes_per_sec": 0 00:24:19.762 }, 00:24:19.762 "claimed": true, 00:24:19.762 "claim_type": "exclusive_write", 00:24:19.762 "zoned": false, 00:24:19.762 "supported_io_types": { 00:24:19.762 "read": true, 00:24:19.762 "write": true, 00:24:19.762 "unmap": true, 00:24:19.762 "flush": true, 00:24:19.762 "reset": true, 00:24:19.762 "nvme_admin": false, 00:24:19.762 "nvme_io": false, 00:24:19.762 "nvme_io_md": false, 00:24:19.762 "write_zeroes": true, 00:24:19.762 "zcopy": true, 00:24:19.762 "get_zone_info": false, 00:24:19.762 "zone_management": false, 00:24:19.762 "zone_append": false, 00:24:19.762 "compare": false, 00:24:19.762 "compare_and_write": false, 00:24:19.762 "abort": true, 00:24:19.762 "seek_hole": false, 00:24:19.762 "seek_data": false, 00:24:19.762 "copy": true, 00:24:19.762 "nvme_iov_md": false 00:24:19.762 }, 00:24:19.762 "memory_domains": [ 00:24:19.762 { 00:24:19.762 "dma_device_id": "system", 00:24:19.762 "dma_device_type": 1 00:24:19.762 }, 00:24:19.762 { 00:24:19.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:19.762 "dma_device_type": 2 00:24:19.762 } 
00:24:19.762 ], 00:24:19.762 "driver_specific": {} 00:24:19.762 } 00:24:19.762 ] 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.762 13:44:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.762 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:19.762 "name": "Existed_Raid", 00:24:19.762 "uuid": "cbd3fa2a-9a3f-4335-a109-b0a751cf4a26", 00:24:19.762 "strip_size_kb": 64, 00:24:19.762 "state": "configuring", 00:24:19.762 "raid_level": "raid0", 00:24:19.762 "superblock": true, 00:24:19.762 "num_base_bdevs": 4, 00:24:19.762 "num_base_bdevs_discovered": 1, 00:24:19.762 "num_base_bdevs_operational": 4, 00:24:19.762 "base_bdevs_list": [ 00:24:19.762 { 00:24:19.762 "name": "BaseBdev1", 00:24:19.762 "uuid": "19493e00-76da-4db9-b31d-641bfccc2b43", 00:24:19.762 "is_configured": true, 00:24:19.762 "data_offset": 2048, 00:24:19.762 "data_size": 63488 00:24:19.762 }, 00:24:19.762 { 00:24:19.762 "name": "BaseBdev2", 00:24:19.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.762 "is_configured": false, 00:24:19.762 "data_offset": 0, 00:24:19.762 "data_size": 0 00:24:19.762 }, 00:24:19.762 { 00:24:19.762 "name": "BaseBdev3", 00:24:19.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.762 "is_configured": false, 00:24:19.762 "data_offset": 0, 00:24:19.762 "data_size": 0 00:24:19.762 }, 00:24:19.762 { 00:24:19.762 "name": "BaseBdev4", 00:24:19.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.762 "is_configured": false, 00:24:19.762 "data_offset": 0, 00:24:19.763 "data_size": 0 00:24:19.763 } 00:24:19.763 ] 00:24:19.763 }' 00:24:19.763 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:19.763 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.330 13:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:20.330 13:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.330 13:44:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.330 [2024-11-20 13:44:23.000758] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:20.330 [2024-11-20 13:44:23.000968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.330 [2024-11-20 13:44:23.008819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:20.330 [2024-11-20 13:44:23.011278] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:20.330 [2024-11-20 13:44:23.011341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:20.330 [2024-11-20 13:44:23.011359] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:20.330 [2024-11-20 13:44:23.011377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:20.330 [2024-11-20 13:44:23.011391] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:20.330 [2024-11-20 13:44:23.011405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:24:20.330 "name": "Existed_Raid", 00:24:20.330 "uuid": "4b09fc53-b764-43b4-a2ec-5f2c42135e0e", 00:24:20.330 "strip_size_kb": 64, 00:24:20.330 "state": "configuring", 00:24:20.330 "raid_level": "raid0", 00:24:20.330 "superblock": true, 00:24:20.330 "num_base_bdevs": 4, 00:24:20.330 "num_base_bdevs_discovered": 1, 00:24:20.330 "num_base_bdevs_operational": 4, 00:24:20.330 "base_bdevs_list": [ 00:24:20.330 { 00:24:20.330 "name": "BaseBdev1", 00:24:20.330 "uuid": "19493e00-76da-4db9-b31d-641bfccc2b43", 00:24:20.330 "is_configured": true, 00:24:20.330 "data_offset": 2048, 00:24:20.330 "data_size": 63488 00:24:20.330 }, 00:24:20.330 { 00:24:20.330 "name": "BaseBdev2", 00:24:20.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.330 "is_configured": false, 00:24:20.330 "data_offset": 0, 00:24:20.330 "data_size": 0 00:24:20.330 }, 00:24:20.330 { 00:24:20.330 "name": "BaseBdev3", 00:24:20.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.330 "is_configured": false, 00:24:20.330 "data_offset": 0, 00:24:20.330 "data_size": 0 00:24:20.330 }, 00:24:20.330 { 00:24:20.330 "name": "BaseBdev4", 00:24:20.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.330 "is_configured": false, 00:24:20.330 "data_offset": 0, 00:24:20.330 "data_size": 0 00:24:20.330 } 00:24:20.330 ] 00:24:20.330 }' 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:20.330 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.898 [2024-11-20 13:44:23.551162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:24:20.898 BaseBdev2 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.898 [ 00:24:20.898 { 00:24:20.898 "name": "BaseBdev2", 00:24:20.898 "aliases": [ 00:24:20.898 "e1f7c29e-f181-47fb-b3d2-7eaabebda41a" 00:24:20.898 ], 00:24:20.898 "product_name": "Malloc disk", 00:24:20.898 "block_size": 512, 00:24:20.898 "num_blocks": 65536, 00:24:20.898 "uuid": "e1f7c29e-f181-47fb-b3d2-7eaabebda41a", 
00:24:20.898 "assigned_rate_limits": { 00:24:20.898 "rw_ios_per_sec": 0, 00:24:20.898 "rw_mbytes_per_sec": 0, 00:24:20.898 "r_mbytes_per_sec": 0, 00:24:20.898 "w_mbytes_per_sec": 0 00:24:20.898 }, 00:24:20.898 "claimed": true, 00:24:20.898 "claim_type": "exclusive_write", 00:24:20.898 "zoned": false, 00:24:20.898 "supported_io_types": { 00:24:20.898 "read": true, 00:24:20.898 "write": true, 00:24:20.898 "unmap": true, 00:24:20.898 "flush": true, 00:24:20.898 "reset": true, 00:24:20.898 "nvme_admin": false, 00:24:20.898 "nvme_io": false, 00:24:20.898 "nvme_io_md": false, 00:24:20.898 "write_zeroes": true, 00:24:20.898 "zcopy": true, 00:24:20.898 "get_zone_info": false, 00:24:20.898 "zone_management": false, 00:24:20.898 "zone_append": false, 00:24:20.898 "compare": false, 00:24:20.898 "compare_and_write": false, 00:24:20.898 "abort": true, 00:24:20.898 "seek_hole": false, 00:24:20.898 "seek_data": false, 00:24:20.898 "copy": true, 00:24:20.898 "nvme_iov_md": false 00:24:20.898 }, 00:24:20.898 "memory_domains": [ 00:24:20.898 { 00:24:20.898 "dma_device_id": "system", 00:24:20.898 "dma_device_type": 1 00:24:20.898 }, 00:24:20.898 { 00:24:20.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.898 "dma_device_type": 2 00:24:20.898 } 00:24:20.898 ], 00:24:20.898 "driver_specific": {} 00:24:20.898 } 00:24:20.898 ] 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:20.898 "name": "Existed_Raid", 00:24:20.898 "uuid": "4b09fc53-b764-43b4-a2ec-5f2c42135e0e", 00:24:20.898 "strip_size_kb": 64, 00:24:20.898 "state": "configuring", 00:24:20.898 "raid_level": "raid0", 00:24:20.898 "superblock": true, 00:24:20.898 "num_base_bdevs": 4, 00:24:20.898 "num_base_bdevs_discovered": 2, 00:24:20.898 
"num_base_bdevs_operational": 4, 00:24:20.898 "base_bdevs_list": [ 00:24:20.898 { 00:24:20.898 "name": "BaseBdev1", 00:24:20.898 "uuid": "19493e00-76da-4db9-b31d-641bfccc2b43", 00:24:20.898 "is_configured": true, 00:24:20.898 "data_offset": 2048, 00:24:20.898 "data_size": 63488 00:24:20.898 }, 00:24:20.898 { 00:24:20.898 "name": "BaseBdev2", 00:24:20.898 "uuid": "e1f7c29e-f181-47fb-b3d2-7eaabebda41a", 00:24:20.898 "is_configured": true, 00:24:20.898 "data_offset": 2048, 00:24:20.898 "data_size": 63488 00:24:20.898 }, 00:24:20.898 { 00:24:20.898 "name": "BaseBdev3", 00:24:20.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.898 "is_configured": false, 00:24:20.898 "data_offset": 0, 00:24:20.898 "data_size": 0 00:24:20.898 }, 00:24:20.898 { 00:24:20.898 "name": "BaseBdev4", 00:24:20.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.898 "is_configured": false, 00:24:20.898 "data_offset": 0, 00:24:20.898 "data_size": 0 00:24:20.898 } 00:24:20.898 ] 00:24:20.898 }' 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:20.898 13:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.466 [2024-11-20 13:44:24.156986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:21.466 BaseBdev3 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.466 [ 00:24:21.466 { 00:24:21.466 "name": "BaseBdev3", 00:24:21.466 "aliases": [ 00:24:21.466 "227befa3-a285-456c-8d75-9f20103762e0" 00:24:21.466 ], 00:24:21.466 "product_name": "Malloc disk", 00:24:21.466 "block_size": 512, 00:24:21.466 "num_blocks": 65536, 00:24:21.466 "uuid": "227befa3-a285-456c-8d75-9f20103762e0", 00:24:21.466 "assigned_rate_limits": { 00:24:21.466 "rw_ios_per_sec": 0, 00:24:21.466 "rw_mbytes_per_sec": 0, 00:24:21.466 "r_mbytes_per_sec": 0, 00:24:21.466 "w_mbytes_per_sec": 0 00:24:21.466 }, 00:24:21.466 "claimed": true, 00:24:21.466 "claim_type": "exclusive_write", 00:24:21.466 "zoned": false, 00:24:21.466 "supported_io_types": { 
00:24:21.466 "read": true, 00:24:21.466 "write": true, 00:24:21.466 "unmap": true, 00:24:21.466 "flush": true, 00:24:21.466 "reset": true, 00:24:21.466 "nvme_admin": false, 00:24:21.466 "nvme_io": false, 00:24:21.466 "nvme_io_md": false, 00:24:21.466 "write_zeroes": true, 00:24:21.466 "zcopy": true, 00:24:21.466 "get_zone_info": false, 00:24:21.466 "zone_management": false, 00:24:21.466 "zone_append": false, 00:24:21.466 "compare": false, 00:24:21.466 "compare_and_write": false, 00:24:21.466 "abort": true, 00:24:21.466 "seek_hole": false, 00:24:21.466 "seek_data": false, 00:24:21.466 "copy": true, 00:24:21.466 "nvme_iov_md": false 00:24:21.466 }, 00:24:21.466 "memory_domains": [ 00:24:21.466 { 00:24:21.466 "dma_device_id": "system", 00:24:21.466 "dma_device_type": 1 00:24:21.466 }, 00:24:21.466 { 00:24:21.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:21.466 "dma_device_type": 2 00:24:21.466 } 00:24:21.466 ], 00:24:21.466 "driver_specific": {} 00:24:21.466 } 00:24:21.466 ] 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.466 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:21.466 "name": "Existed_Raid", 00:24:21.466 "uuid": "4b09fc53-b764-43b4-a2ec-5f2c42135e0e", 00:24:21.466 "strip_size_kb": 64, 00:24:21.466 "state": "configuring", 00:24:21.467 "raid_level": "raid0", 00:24:21.467 "superblock": true, 00:24:21.467 "num_base_bdevs": 4, 00:24:21.467 "num_base_bdevs_discovered": 3, 00:24:21.467 "num_base_bdevs_operational": 4, 00:24:21.467 "base_bdevs_list": [ 00:24:21.467 { 00:24:21.467 "name": "BaseBdev1", 00:24:21.467 "uuid": "19493e00-76da-4db9-b31d-641bfccc2b43", 00:24:21.467 "is_configured": true, 00:24:21.467 "data_offset": 2048, 00:24:21.467 "data_size": 63488 00:24:21.467 }, 00:24:21.467 { 00:24:21.467 "name": "BaseBdev2", 00:24:21.467 
"uuid": "e1f7c29e-f181-47fb-b3d2-7eaabebda41a", 00:24:21.467 "is_configured": true, 00:24:21.467 "data_offset": 2048, 00:24:21.467 "data_size": 63488 00:24:21.467 }, 00:24:21.467 { 00:24:21.467 "name": "BaseBdev3", 00:24:21.467 "uuid": "227befa3-a285-456c-8d75-9f20103762e0", 00:24:21.467 "is_configured": true, 00:24:21.467 "data_offset": 2048, 00:24:21.467 "data_size": 63488 00:24:21.467 }, 00:24:21.467 { 00:24:21.467 "name": "BaseBdev4", 00:24:21.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.467 "is_configured": false, 00:24:21.467 "data_offset": 0, 00:24:21.467 "data_size": 0 00:24:21.467 } 00:24:21.467 ] 00:24:21.467 }' 00:24:21.467 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:21.467 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.034 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:24:22.034 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.034 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.034 [2024-11-20 13:44:24.739976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:22.035 [2024-11-20 13:44:24.740320] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:22.035 [2024-11-20 13:44:24.740340] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:22.035 BaseBdev4 00:24:22.035 [2024-11-20 13:44:24.740697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:22.035 [2024-11-20 13:44:24.740886] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:22.035 [2024-11-20 13:44:24.740931] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:24:22.035 [2024-11-20 13:44:24.741108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.035 [ 00:24:22.035 { 00:24:22.035 "name": "BaseBdev4", 00:24:22.035 "aliases": [ 00:24:22.035 "362cce34-04ae-4731-8474-c5ade9352190" 00:24:22.035 ], 00:24:22.035 "product_name": "Malloc disk", 00:24:22.035 "block_size": 512, 00:24:22.035 
"num_blocks": 65536, 00:24:22.035 "uuid": "362cce34-04ae-4731-8474-c5ade9352190", 00:24:22.035 "assigned_rate_limits": { 00:24:22.035 "rw_ios_per_sec": 0, 00:24:22.035 "rw_mbytes_per_sec": 0, 00:24:22.035 "r_mbytes_per_sec": 0, 00:24:22.035 "w_mbytes_per_sec": 0 00:24:22.035 }, 00:24:22.035 "claimed": true, 00:24:22.035 "claim_type": "exclusive_write", 00:24:22.035 "zoned": false, 00:24:22.035 "supported_io_types": { 00:24:22.035 "read": true, 00:24:22.035 "write": true, 00:24:22.035 "unmap": true, 00:24:22.035 "flush": true, 00:24:22.035 "reset": true, 00:24:22.035 "nvme_admin": false, 00:24:22.035 "nvme_io": false, 00:24:22.035 "nvme_io_md": false, 00:24:22.035 "write_zeroes": true, 00:24:22.035 "zcopy": true, 00:24:22.035 "get_zone_info": false, 00:24:22.035 "zone_management": false, 00:24:22.035 "zone_append": false, 00:24:22.035 "compare": false, 00:24:22.035 "compare_and_write": false, 00:24:22.035 "abort": true, 00:24:22.035 "seek_hole": false, 00:24:22.035 "seek_data": false, 00:24:22.035 "copy": true, 00:24:22.035 "nvme_iov_md": false 00:24:22.035 }, 00:24:22.035 "memory_domains": [ 00:24:22.035 { 00:24:22.035 "dma_device_id": "system", 00:24:22.035 "dma_device_type": 1 00:24:22.035 }, 00:24:22.035 { 00:24:22.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.035 "dma_device_type": 2 00:24:22.035 } 00:24:22.035 ], 00:24:22.035 "driver_specific": {} 00:24:22.035 } 00:24:22.035 ] 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:22.035 "name": "Existed_Raid", 00:24:22.035 "uuid": "4b09fc53-b764-43b4-a2ec-5f2c42135e0e", 00:24:22.035 "strip_size_kb": 64, 00:24:22.035 "state": "online", 00:24:22.035 "raid_level": "raid0", 00:24:22.035 "superblock": true, 00:24:22.035 "num_base_bdevs": 4, 
00:24:22.035 "num_base_bdevs_discovered": 4, 00:24:22.035 "num_base_bdevs_operational": 4, 00:24:22.035 "base_bdevs_list": [ 00:24:22.035 { 00:24:22.035 "name": "BaseBdev1", 00:24:22.035 "uuid": "19493e00-76da-4db9-b31d-641bfccc2b43", 00:24:22.035 "is_configured": true, 00:24:22.035 "data_offset": 2048, 00:24:22.035 "data_size": 63488 00:24:22.035 }, 00:24:22.035 { 00:24:22.035 "name": "BaseBdev2", 00:24:22.035 "uuid": "e1f7c29e-f181-47fb-b3d2-7eaabebda41a", 00:24:22.035 "is_configured": true, 00:24:22.035 "data_offset": 2048, 00:24:22.035 "data_size": 63488 00:24:22.035 }, 00:24:22.035 { 00:24:22.035 "name": "BaseBdev3", 00:24:22.035 "uuid": "227befa3-a285-456c-8d75-9f20103762e0", 00:24:22.035 "is_configured": true, 00:24:22.035 "data_offset": 2048, 00:24:22.035 "data_size": 63488 00:24:22.035 }, 00:24:22.035 { 00:24:22.035 "name": "BaseBdev4", 00:24:22.035 "uuid": "362cce34-04ae-4731-8474-c5ade9352190", 00:24:22.035 "is_configured": true, 00:24:22.035 "data_offset": 2048, 00:24:22.035 "data_size": 63488 00:24:22.035 } 00:24:22.035 ] 00:24:22.035 }' 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:22.035 13:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.603 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:22.603 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:22.603 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:22.603 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:22.603 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:24:22.603 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:22.603 
13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:22.603 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:22.603 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.603 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.603 [2024-11-20 13:44:25.272618] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:22.603 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.603 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:22.603 "name": "Existed_Raid", 00:24:22.603 "aliases": [ 00:24:22.603 "4b09fc53-b764-43b4-a2ec-5f2c42135e0e" 00:24:22.603 ], 00:24:22.603 "product_name": "Raid Volume", 00:24:22.603 "block_size": 512, 00:24:22.603 "num_blocks": 253952, 00:24:22.603 "uuid": "4b09fc53-b764-43b4-a2ec-5f2c42135e0e", 00:24:22.603 "assigned_rate_limits": { 00:24:22.603 "rw_ios_per_sec": 0, 00:24:22.603 "rw_mbytes_per_sec": 0, 00:24:22.603 "r_mbytes_per_sec": 0, 00:24:22.603 "w_mbytes_per_sec": 0 00:24:22.603 }, 00:24:22.603 "claimed": false, 00:24:22.603 "zoned": false, 00:24:22.603 "supported_io_types": { 00:24:22.603 "read": true, 00:24:22.603 "write": true, 00:24:22.603 "unmap": true, 00:24:22.603 "flush": true, 00:24:22.603 "reset": true, 00:24:22.603 "nvme_admin": false, 00:24:22.603 "nvme_io": false, 00:24:22.603 "nvme_io_md": false, 00:24:22.603 "write_zeroes": true, 00:24:22.603 "zcopy": false, 00:24:22.603 "get_zone_info": false, 00:24:22.603 "zone_management": false, 00:24:22.603 "zone_append": false, 00:24:22.603 "compare": false, 00:24:22.603 "compare_and_write": false, 00:24:22.603 "abort": false, 00:24:22.603 "seek_hole": false, 00:24:22.603 "seek_data": false, 00:24:22.603 "copy": false, 00:24:22.603 
"nvme_iov_md": false 00:24:22.603 }, 00:24:22.603 "memory_domains": [ 00:24:22.603 { 00:24:22.603 "dma_device_id": "system", 00:24:22.603 "dma_device_type": 1 00:24:22.603 }, 00:24:22.603 { 00:24:22.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.603 "dma_device_type": 2 00:24:22.603 }, 00:24:22.603 { 00:24:22.603 "dma_device_id": "system", 00:24:22.603 "dma_device_type": 1 00:24:22.603 }, 00:24:22.603 { 00:24:22.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.603 "dma_device_type": 2 00:24:22.603 }, 00:24:22.603 { 00:24:22.603 "dma_device_id": "system", 00:24:22.603 "dma_device_type": 1 00:24:22.603 }, 00:24:22.603 { 00:24:22.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.603 "dma_device_type": 2 00:24:22.603 }, 00:24:22.603 { 00:24:22.603 "dma_device_id": "system", 00:24:22.603 "dma_device_type": 1 00:24:22.603 }, 00:24:22.603 { 00:24:22.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.603 "dma_device_type": 2 00:24:22.603 } 00:24:22.603 ], 00:24:22.603 "driver_specific": { 00:24:22.603 "raid": { 00:24:22.603 "uuid": "4b09fc53-b764-43b4-a2ec-5f2c42135e0e", 00:24:22.603 "strip_size_kb": 64, 00:24:22.603 "state": "online", 00:24:22.603 "raid_level": "raid0", 00:24:22.603 "superblock": true, 00:24:22.603 "num_base_bdevs": 4, 00:24:22.603 "num_base_bdevs_discovered": 4, 00:24:22.603 "num_base_bdevs_operational": 4, 00:24:22.603 "base_bdevs_list": [ 00:24:22.603 { 00:24:22.603 "name": "BaseBdev1", 00:24:22.603 "uuid": "19493e00-76da-4db9-b31d-641bfccc2b43", 00:24:22.603 "is_configured": true, 00:24:22.603 "data_offset": 2048, 00:24:22.603 "data_size": 63488 00:24:22.603 }, 00:24:22.603 { 00:24:22.603 "name": "BaseBdev2", 00:24:22.603 "uuid": "e1f7c29e-f181-47fb-b3d2-7eaabebda41a", 00:24:22.603 "is_configured": true, 00:24:22.603 "data_offset": 2048, 00:24:22.603 "data_size": 63488 00:24:22.603 }, 00:24:22.603 { 00:24:22.603 "name": "BaseBdev3", 00:24:22.603 "uuid": "227befa3-a285-456c-8d75-9f20103762e0", 00:24:22.603 "is_configured": true, 
00:24:22.603 "data_offset": 2048, 00:24:22.603 "data_size": 63488 00:24:22.603 }, 00:24:22.603 { 00:24:22.603 "name": "BaseBdev4", 00:24:22.603 "uuid": "362cce34-04ae-4731-8474-c5ade9352190", 00:24:22.603 "is_configured": true, 00:24:22.603 "data_offset": 2048, 00:24:22.603 "data_size": 63488 00:24:22.603 } 00:24:22.603 ] 00:24:22.603 } 00:24:22.603 } 00:24:22.603 }' 00:24:22.604 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:22.604 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:22.604 BaseBdev2 00:24:22.604 BaseBdev3 00:24:22.604 BaseBdev4' 00:24:22.604 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:22.604 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:22.604 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:22.604 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:22.604 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.604 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.604 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:22.604 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.604 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:22.604 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:22.604 13:44:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:22.604 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:22.604 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:22.604 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.604 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.864 [2024-11-20 13:44:25.664690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:22.864 [2024-11-20 13:44:25.664731] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:22.864 [2024-11-20 13:44:25.664799] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.864 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.137 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:23.137 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:23.137 "name": "Existed_Raid", 00:24:23.137 "uuid": "4b09fc53-b764-43b4-a2ec-5f2c42135e0e", 00:24:23.137 "strip_size_kb": 64, 00:24:23.137 "state": "offline", 00:24:23.137 "raid_level": "raid0", 00:24:23.137 "superblock": true, 00:24:23.137 "num_base_bdevs": 4, 00:24:23.137 "num_base_bdevs_discovered": 3, 00:24:23.137 "num_base_bdevs_operational": 3, 00:24:23.137 "base_bdevs_list": [ 00:24:23.137 { 00:24:23.137 "name": null, 00:24:23.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.137 "is_configured": false, 00:24:23.137 "data_offset": 0, 00:24:23.137 "data_size": 63488 00:24:23.137 }, 00:24:23.137 { 00:24:23.137 "name": "BaseBdev2", 00:24:23.137 "uuid": "e1f7c29e-f181-47fb-b3d2-7eaabebda41a", 00:24:23.137 "is_configured": true, 00:24:23.137 "data_offset": 2048, 00:24:23.137 "data_size": 63488 00:24:23.137 }, 00:24:23.137 { 00:24:23.137 "name": "BaseBdev3", 00:24:23.137 "uuid": "227befa3-a285-456c-8d75-9f20103762e0", 00:24:23.137 "is_configured": true, 00:24:23.137 "data_offset": 2048, 00:24:23.137 "data_size": 63488 00:24:23.137 }, 00:24:23.137 { 00:24:23.137 "name": "BaseBdev4", 00:24:23.137 "uuid": "362cce34-04ae-4731-8474-c5ade9352190", 00:24:23.137 "is_configured": true, 00:24:23.137 "data_offset": 2048, 00:24:23.137 "data_size": 63488 00:24:23.137 } 00:24:23.137 ] 00:24:23.137 }' 00:24:23.137 13:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:23.137 13:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.396 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:23.396 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:23.396 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.396 
13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.396 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:23.396 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.396 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.654 [2024-11-20 13:44:26.319484] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.654 [2024-11-20 13:44:26.470816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:23.654 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.655 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:24:23.913 13:44:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.913 [2024-11-20 13:44:26.625321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:23.913 [2024-11-20 13:44:26.625400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.913 BaseBdev2 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:23.913 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:23.914 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:23.914 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:23.914 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:23.914 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:23.914 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.914 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.914 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.914 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:23.914 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.914 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.174 [ 00:24:24.174 { 00:24:24.174 "name": "BaseBdev2", 00:24:24.174 "aliases": [ 00:24:24.174 
"0747e568-c45b-4ca1-8777-7063f35a1ef0" 00:24:24.174 ], 00:24:24.174 "product_name": "Malloc disk", 00:24:24.174 "block_size": 512, 00:24:24.174 "num_blocks": 65536, 00:24:24.174 "uuid": "0747e568-c45b-4ca1-8777-7063f35a1ef0", 00:24:24.174 "assigned_rate_limits": { 00:24:24.174 "rw_ios_per_sec": 0, 00:24:24.174 "rw_mbytes_per_sec": 0, 00:24:24.174 "r_mbytes_per_sec": 0, 00:24:24.174 "w_mbytes_per_sec": 0 00:24:24.174 }, 00:24:24.174 "claimed": false, 00:24:24.174 "zoned": false, 00:24:24.174 "supported_io_types": { 00:24:24.174 "read": true, 00:24:24.174 "write": true, 00:24:24.174 "unmap": true, 00:24:24.174 "flush": true, 00:24:24.174 "reset": true, 00:24:24.174 "nvme_admin": false, 00:24:24.174 "nvme_io": false, 00:24:24.174 "nvme_io_md": false, 00:24:24.174 "write_zeroes": true, 00:24:24.174 "zcopy": true, 00:24:24.174 "get_zone_info": false, 00:24:24.174 "zone_management": false, 00:24:24.174 "zone_append": false, 00:24:24.174 "compare": false, 00:24:24.174 "compare_and_write": false, 00:24:24.174 "abort": true, 00:24:24.174 "seek_hole": false, 00:24:24.174 "seek_data": false, 00:24:24.174 "copy": true, 00:24:24.174 "nvme_iov_md": false 00:24:24.174 }, 00:24:24.174 "memory_domains": [ 00:24:24.174 { 00:24:24.174 "dma_device_id": "system", 00:24:24.174 "dma_device_type": 1 00:24:24.174 }, 00:24:24.174 { 00:24:24.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.174 "dma_device_type": 2 00:24:24.174 } 00:24:24.174 ], 00:24:24.174 "driver_specific": {} 00:24:24.174 } 00:24:24.174 ] 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:24.174 13:44:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.174 BaseBdev3 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.174 [ 00:24:24.174 { 
00:24:24.174 "name": "BaseBdev3", 00:24:24.174 "aliases": [ 00:24:24.174 "a6da9f1a-1bed-44c7-86a9-fe6bf2135090" 00:24:24.174 ], 00:24:24.174 "product_name": "Malloc disk", 00:24:24.174 "block_size": 512, 00:24:24.174 "num_blocks": 65536, 00:24:24.174 "uuid": "a6da9f1a-1bed-44c7-86a9-fe6bf2135090", 00:24:24.174 "assigned_rate_limits": { 00:24:24.174 "rw_ios_per_sec": 0, 00:24:24.174 "rw_mbytes_per_sec": 0, 00:24:24.174 "r_mbytes_per_sec": 0, 00:24:24.174 "w_mbytes_per_sec": 0 00:24:24.174 }, 00:24:24.174 "claimed": false, 00:24:24.174 "zoned": false, 00:24:24.174 "supported_io_types": { 00:24:24.174 "read": true, 00:24:24.174 "write": true, 00:24:24.174 "unmap": true, 00:24:24.174 "flush": true, 00:24:24.174 "reset": true, 00:24:24.174 "nvme_admin": false, 00:24:24.174 "nvme_io": false, 00:24:24.174 "nvme_io_md": false, 00:24:24.174 "write_zeroes": true, 00:24:24.174 "zcopy": true, 00:24:24.174 "get_zone_info": false, 00:24:24.174 "zone_management": false, 00:24:24.174 "zone_append": false, 00:24:24.174 "compare": false, 00:24:24.174 "compare_and_write": false, 00:24:24.174 "abort": true, 00:24:24.174 "seek_hole": false, 00:24:24.174 "seek_data": false, 00:24:24.174 "copy": true, 00:24:24.174 "nvme_iov_md": false 00:24:24.174 }, 00:24:24.174 "memory_domains": [ 00:24:24.174 { 00:24:24.174 "dma_device_id": "system", 00:24:24.174 "dma_device_type": 1 00:24:24.174 }, 00:24:24.174 { 00:24:24.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.174 "dma_device_type": 2 00:24:24.174 } 00:24:24.174 ], 00:24:24.174 "driver_specific": {} 00:24:24.174 } 00:24:24.174 ] 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.174 BaseBdev4 00:24:24.174 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.175 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:24:24.175 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:24:24.175 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:24.175 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:24.175 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:24.175 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:24.175 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:24.175 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.175 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.175 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.175 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:24.175 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.175 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:24:24.175 [ 00:24:24.175 { 00:24:24.175 "name": "BaseBdev4", 00:24:24.175 "aliases": [ 00:24:24.175 "adc95603-0c50-4dd4-80f0-032b6c0279bd" 00:24:24.175 ], 00:24:24.175 "product_name": "Malloc disk", 00:24:24.175 "block_size": 512, 00:24:24.175 "num_blocks": 65536, 00:24:24.175 "uuid": "adc95603-0c50-4dd4-80f0-032b6c0279bd", 00:24:24.175 "assigned_rate_limits": { 00:24:24.175 "rw_ios_per_sec": 0, 00:24:24.175 "rw_mbytes_per_sec": 0, 00:24:24.175 "r_mbytes_per_sec": 0, 00:24:24.175 "w_mbytes_per_sec": 0 00:24:24.175 }, 00:24:24.175 "claimed": false, 00:24:24.175 "zoned": false, 00:24:24.175 "supported_io_types": { 00:24:24.175 "read": true, 00:24:24.175 "write": true, 00:24:24.175 "unmap": true, 00:24:24.175 "flush": true, 00:24:24.175 "reset": true, 00:24:24.175 "nvme_admin": false, 00:24:24.175 "nvme_io": false, 00:24:24.175 "nvme_io_md": false, 00:24:24.175 "write_zeroes": true, 00:24:24.175 "zcopy": true, 00:24:24.175 "get_zone_info": false, 00:24:24.175 "zone_management": false, 00:24:24.175 "zone_append": false, 00:24:24.175 "compare": false, 00:24:24.175 "compare_and_write": false, 00:24:24.175 "abort": true, 00:24:24.175 "seek_hole": false, 00:24:24.175 "seek_data": false, 00:24:24.175 "copy": true, 00:24:24.175 "nvme_iov_md": false 00:24:24.175 }, 00:24:24.175 "memory_domains": [ 00:24:24.175 { 00:24:24.175 "dma_device_id": "system", 00:24:24.175 "dma_device_type": 1 00:24:24.175 }, 00:24:24.175 { 00:24:24.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.175 "dma_device_type": 2 00:24:24.175 } 00:24:24.175 ], 00:24:24.175 "driver_specific": {} 00:24:24.175 } 00:24:24.175 ] 00:24:24.175 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.175 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:24.175 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:24.175 13:44:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:24.175 13:44:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:24.175 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.175 13:44:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.175 [2024-11-20 13:44:26.998032] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:24.175 [2024-11-20 13:44:26.998234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:24.175 [2024-11-20 13:44:26.998388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:24.175 [2024-11-20 13:44:27.001013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:24.175 [2024-11-20 13:44:27.001212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:24.175 13:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.175 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:24.175 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:24.175 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:24.175 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:24.175 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:24.175 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:24:24.175 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:24.175 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:24.175 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:24.175 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:24.175 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.175 13:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.175 13:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.175 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:24.175 13:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.175 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:24.175 "name": "Existed_Raid", 00:24:24.175 "uuid": "d2e226e7-2e69-412b-b8b3-efdf2a44d576", 00:24:24.175 "strip_size_kb": 64, 00:24:24.175 "state": "configuring", 00:24:24.175 "raid_level": "raid0", 00:24:24.175 "superblock": true, 00:24:24.175 "num_base_bdevs": 4, 00:24:24.175 "num_base_bdevs_discovered": 3, 00:24:24.175 "num_base_bdevs_operational": 4, 00:24:24.175 "base_bdevs_list": [ 00:24:24.175 { 00:24:24.175 "name": "BaseBdev1", 00:24:24.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.175 "is_configured": false, 00:24:24.175 "data_offset": 0, 00:24:24.175 "data_size": 0 00:24:24.175 }, 00:24:24.175 { 00:24:24.175 "name": "BaseBdev2", 00:24:24.175 "uuid": "0747e568-c45b-4ca1-8777-7063f35a1ef0", 00:24:24.175 "is_configured": true, 00:24:24.175 "data_offset": 2048, 00:24:24.175 "data_size": 63488 
00:24:24.175 }, 00:24:24.175 { 00:24:24.175 "name": "BaseBdev3", 00:24:24.175 "uuid": "a6da9f1a-1bed-44c7-86a9-fe6bf2135090", 00:24:24.175 "is_configured": true, 00:24:24.175 "data_offset": 2048, 00:24:24.175 "data_size": 63488 00:24:24.175 }, 00:24:24.175 { 00:24:24.175 "name": "BaseBdev4", 00:24:24.175 "uuid": "adc95603-0c50-4dd4-80f0-032b6c0279bd", 00:24:24.175 "is_configured": true, 00:24:24.175 "data_offset": 2048, 00:24:24.175 "data_size": 63488 00:24:24.175 } 00:24:24.175 ] 00:24:24.175 }' 00:24:24.175 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:24.175 13:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.743 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:24:24.743 13:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.743 13:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.743 [2024-11-20 13:44:27.606175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:24.743 13:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.743 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:24.743 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:24.743 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:24.743 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:24.743 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:24.743 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:24:24.743 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:24.743 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:24.743 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:24.743 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:24.743 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:24.743 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:24.743 13:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.743 13:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:24.743 13:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.002 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:25.002 "name": "Existed_Raid", 00:24:25.002 "uuid": "d2e226e7-2e69-412b-b8b3-efdf2a44d576", 00:24:25.002 "strip_size_kb": 64, 00:24:25.002 "state": "configuring", 00:24:25.002 "raid_level": "raid0", 00:24:25.002 "superblock": true, 00:24:25.002 "num_base_bdevs": 4, 00:24:25.002 "num_base_bdevs_discovered": 2, 00:24:25.002 "num_base_bdevs_operational": 4, 00:24:25.002 "base_bdevs_list": [ 00:24:25.002 { 00:24:25.002 "name": "BaseBdev1", 00:24:25.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.002 "is_configured": false, 00:24:25.002 "data_offset": 0, 00:24:25.002 "data_size": 0 00:24:25.002 }, 00:24:25.002 { 00:24:25.002 "name": null, 00:24:25.002 "uuid": "0747e568-c45b-4ca1-8777-7063f35a1ef0", 00:24:25.002 "is_configured": false, 00:24:25.002 "data_offset": 0, 00:24:25.002 "data_size": 63488 
00:24:25.002 }, 00:24:25.002 { 00:24:25.002 "name": "BaseBdev3", 00:24:25.002 "uuid": "a6da9f1a-1bed-44c7-86a9-fe6bf2135090", 00:24:25.002 "is_configured": true, 00:24:25.002 "data_offset": 2048, 00:24:25.002 "data_size": 63488 00:24:25.002 }, 00:24:25.002 { 00:24:25.002 "name": "BaseBdev4", 00:24:25.002 "uuid": "adc95603-0c50-4dd4-80f0-032b6c0279bd", 00:24:25.002 "is_configured": true, 00:24:25.002 "data_offset": 2048, 00:24:25.002 "data_size": 63488 00:24:25.002 } 00:24:25.002 ] 00:24:25.002 }' 00:24:25.002 13:44:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:25.002 13:44:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.260 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.260 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:25.260 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.260 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.519 [2024-11-20 13:44:28.273111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:25.519 BaseBdev1 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.519 [ 00:24:25.519 { 00:24:25.519 "name": "BaseBdev1", 00:24:25.519 "aliases": [ 00:24:25.519 "49adc512-a1e0-4781-b548-6630090b9f69" 00:24:25.519 ], 00:24:25.519 "product_name": "Malloc disk", 00:24:25.519 "block_size": 512, 00:24:25.519 "num_blocks": 65536, 00:24:25.519 "uuid": "49adc512-a1e0-4781-b548-6630090b9f69", 00:24:25.519 "assigned_rate_limits": { 00:24:25.519 "rw_ios_per_sec": 0, 00:24:25.519 "rw_mbytes_per_sec": 0, 
00:24:25.519 "r_mbytes_per_sec": 0, 00:24:25.519 "w_mbytes_per_sec": 0 00:24:25.519 }, 00:24:25.519 "claimed": true, 00:24:25.519 "claim_type": "exclusive_write", 00:24:25.519 "zoned": false, 00:24:25.519 "supported_io_types": { 00:24:25.519 "read": true, 00:24:25.519 "write": true, 00:24:25.519 "unmap": true, 00:24:25.519 "flush": true, 00:24:25.519 "reset": true, 00:24:25.519 "nvme_admin": false, 00:24:25.519 "nvme_io": false, 00:24:25.519 "nvme_io_md": false, 00:24:25.519 "write_zeroes": true, 00:24:25.519 "zcopy": true, 00:24:25.519 "get_zone_info": false, 00:24:25.519 "zone_management": false, 00:24:25.519 "zone_append": false, 00:24:25.519 "compare": false, 00:24:25.519 "compare_and_write": false, 00:24:25.519 "abort": true, 00:24:25.519 "seek_hole": false, 00:24:25.519 "seek_data": false, 00:24:25.519 "copy": true, 00:24:25.519 "nvme_iov_md": false 00:24:25.519 }, 00:24:25.519 "memory_domains": [ 00:24:25.519 { 00:24:25.519 "dma_device_id": "system", 00:24:25.519 "dma_device_type": 1 00:24:25.519 }, 00:24:25.519 { 00:24:25.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:25.519 "dma_device_type": 2 00:24:25.519 } 00:24:25.519 ], 00:24:25.519 "driver_specific": {} 00:24:25.519 } 00:24:25.519 ] 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:25.519 13:44:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:25.519 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:25.520 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:25.520 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:25.520 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.520 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.520 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.520 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.520 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:25.520 "name": "Existed_Raid", 00:24:25.520 "uuid": "d2e226e7-2e69-412b-b8b3-efdf2a44d576", 00:24:25.520 "strip_size_kb": 64, 00:24:25.520 "state": "configuring", 00:24:25.520 "raid_level": "raid0", 00:24:25.520 "superblock": true, 00:24:25.520 "num_base_bdevs": 4, 00:24:25.520 "num_base_bdevs_discovered": 3, 00:24:25.520 "num_base_bdevs_operational": 4, 00:24:25.520 "base_bdevs_list": [ 00:24:25.520 { 00:24:25.520 "name": "BaseBdev1", 00:24:25.520 "uuid": "49adc512-a1e0-4781-b548-6630090b9f69", 00:24:25.520 "is_configured": true, 00:24:25.520 "data_offset": 2048, 00:24:25.520 "data_size": 63488 00:24:25.520 }, 00:24:25.520 { 
00:24:25.520 "name": null, 00:24:25.520 "uuid": "0747e568-c45b-4ca1-8777-7063f35a1ef0", 00:24:25.520 "is_configured": false, 00:24:25.520 "data_offset": 0, 00:24:25.520 "data_size": 63488 00:24:25.520 }, 00:24:25.520 { 00:24:25.520 "name": "BaseBdev3", 00:24:25.520 "uuid": "a6da9f1a-1bed-44c7-86a9-fe6bf2135090", 00:24:25.520 "is_configured": true, 00:24:25.520 "data_offset": 2048, 00:24:25.520 "data_size": 63488 00:24:25.520 }, 00:24:25.520 { 00:24:25.520 "name": "BaseBdev4", 00:24:25.520 "uuid": "adc95603-0c50-4dd4-80f0-032b6c0279bd", 00:24:25.520 "is_configured": true, 00:24:25.520 "data_offset": 2048, 00:24:25.520 "data_size": 63488 00:24:25.520 } 00:24:25.520 ] 00:24:25.520 }' 00:24:25.520 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:25.520 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.088 [2024-11-20 13:44:28.933420] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.088 13:44:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:26.088 "name": "Existed_Raid", 00:24:26.088 "uuid": "d2e226e7-2e69-412b-b8b3-efdf2a44d576", 00:24:26.088 "strip_size_kb": 64, 00:24:26.088 "state": "configuring", 00:24:26.088 "raid_level": "raid0", 00:24:26.088 "superblock": true, 00:24:26.088 "num_base_bdevs": 4, 00:24:26.088 "num_base_bdevs_discovered": 2, 00:24:26.088 "num_base_bdevs_operational": 4, 00:24:26.088 "base_bdevs_list": [ 00:24:26.088 { 00:24:26.088 "name": "BaseBdev1", 00:24:26.088 "uuid": "49adc512-a1e0-4781-b548-6630090b9f69", 00:24:26.088 "is_configured": true, 00:24:26.088 "data_offset": 2048, 00:24:26.088 "data_size": 63488 00:24:26.088 }, 00:24:26.088 { 00:24:26.088 "name": null, 00:24:26.088 "uuid": "0747e568-c45b-4ca1-8777-7063f35a1ef0", 00:24:26.088 "is_configured": false, 00:24:26.088 "data_offset": 0, 00:24:26.088 "data_size": 63488 00:24:26.088 }, 00:24:26.088 { 00:24:26.088 "name": null, 00:24:26.088 "uuid": "a6da9f1a-1bed-44c7-86a9-fe6bf2135090", 00:24:26.088 "is_configured": false, 00:24:26.088 "data_offset": 0, 00:24:26.088 "data_size": 63488 00:24:26.088 }, 00:24:26.088 { 00:24:26.088 "name": "BaseBdev4", 00:24:26.088 "uuid": "adc95603-0c50-4dd4-80f0-032b6c0279bd", 00:24:26.088 "is_configured": true, 00:24:26.088 "data_offset": 2048, 00:24:26.088 "data_size": 63488 00:24:26.088 } 00:24:26.088 ] 00:24:26.088 }' 00:24:26.088 13:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:26.088 13:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.655 
13:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.655 [2024-11-20 13:44:29.557612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.655 13:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.914 13:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.914 13:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:26.914 "name": "Existed_Raid", 00:24:26.914 "uuid": "d2e226e7-2e69-412b-b8b3-efdf2a44d576", 00:24:26.914 "strip_size_kb": 64, 00:24:26.914 "state": "configuring", 00:24:26.914 "raid_level": "raid0", 00:24:26.914 "superblock": true, 00:24:26.914 "num_base_bdevs": 4, 00:24:26.914 "num_base_bdevs_discovered": 3, 00:24:26.914 "num_base_bdevs_operational": 4, 00:24:26.914 "base_bdevs_list": [ 00:24:26.914 { 00:24:26.914 "name": "BaseBdev1", 00:24:26.914 "uuid": "49adc512-a1e0-4781-b548-6630090b9f69", 00:24:26.914 "is_configured": true, 00:24:26.914 "data_offset": 2048, 00:24:26.914 "data_size": 63488 00:24:26.914 }, 00:24:26.914 { 00:24:26.914 "name": null, 00:24:26.914 "uuid": "0747e568-c45b-4ca1-8777-7063f35a1ef0", 00:24:26.914 "is_configured": false, 00:24:26.914 "data_offset": 0, 00:24:26.914 "data_size": 63488 00:24:26.914 }, 00:24:26.914 { 00:24:26.914 "name": "BaseBdev3", 00:24:26.914 "uuid": "a6da9f1a-1bed-44c7-86a9-fe6bf2135090", 00:24:26.914 "is_configured": true, 00:24:26.914 "data_offset": 2048, 00:24:26.914 "data_size": 63488 00:24:26.914 }, 00:24:26.914 { 00:24:26.914 "name": "BaseBdev4", 00:24:26.914 "uuid": 
"adc95603-0c50-4dd4-80f0-032b6c0279bd", 00:24:26.914 "is_configured": true, 00:24:26.914 "data_offset": 2048, 00:24:26.914 "data_size": 63488 00:24:26.914 } 00:24:26.914 ] 00:24:26.914 }' 00:24:26.914 13:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:26.914 13:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.173 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:27.173 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.173 13:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.173 13:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.173 13:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.173 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:24:27.173 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:27.173 13:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.173 13:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.173 [2024-11-20 13:44:30.069711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:27.431 13:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.431 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:27.431 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:27.431 13:44:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:27.431 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:27.431 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:27.431 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:27.431 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:27.431 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:27.431 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:27.432 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:27.432 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.432 13:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.432 13:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.432 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.432 13:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.432 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:27.432 "name": "Existed_Raid", 00:24:27.432 "uuid": "d2e226e7-2e69-412b-b8b3-efdf2a44d576", 00:24:27.432 "strip_size_kb": 64, 00:24:27.432 "state": "configuring", 00:24:27.432 "raid_level": "raid0", 00:24:27.432 "superblock": true, 00:24:27.432 "num_base_bdevs": 4, 00:24:27.432 "num_base_bdevs_discovered": 2, 00:24:27.432 "num_base_bdevs_operational": 4, 00:24:27.432 "base_bdevs_list": [ 00:24:27.432 { 00:24:27.432 "name": null, 00:24:27.432 
"uuid": "49adc512-a1e0-4781-b548-6630090b9f69", 00:24:27.432 "is_configured": false, 00:24:27.432 "data_offset": 0, 00:24:27.432 "data_size": 63488 00:24:27.432 }, 00:24:27.432 { 00:24:27.432 "name": null, 00:24:27.432 "uuid": "0747e568-c45b-4ca1-8777-7063f35a1ef0", 00:24:27.432 "is_configured": false, 00:24:27.432 "data_offset": 0, 00:24:27.432 "data_size": 63488 00:24:27.432 }, 00:24:27.432 { 00:24:27.432 "name": "BaseBdev3", 00:24:27.432 "uuid": "a6da9f1a-1bed-44c7-86a9-fe6bf2135090", 00:24:27.432 "is_configured": true, 00:24:27.432 "data_offset": 2048, 00:24:27.432 "data_size": 63488 00:24:27.432 }, 00:24:27.432 { 00:24:27.432 "name": "BaseBdev4", 00:24:27.432 "uuid": "adc95603-0c50-4dd4-80f0-032b6c0279bd", 00:24:27.432 "is_configured": true, 00:24:27.432 "data_offset": 2048, 00:24:27.432 "data_size": 63488 00:24:27.432 } 00:24:27.432 ] 00:24:27.432 }' 00:24:27.432 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:27.432 13:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.032 [2024-11-20 13:44:30.738888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:28.032 "name": "Existed_Raid", 00:24:28.032 "uuid": "d2e226e7-2e69-412b-b8b3-efdf2a44d576", 00:24:28.032 "strip_size_kb": 64, 00:24:28.032 "state": "configuring", 00:24:28.032 "raid_level": "raid0", 00:24:28.032 "superblock": true, 00:24:28.032 "num_base_bdevs": 4, 00:24:28.032 "num_base_bdevs_discovered": 3, 00:24:28.032 "num_base_bdevs_operational": 4, 00:24:28.032 "base_bdevs_list": [ 00:24:28.032 { 00:24:28.032 "name": null, 00:24:28.032 "uuid": "49adc512-a1e0-4781-b548-6630090b9f69", 00:24:28.032 "is_configured": false, 00:24:28.032 "data_offset": 0, 00:24:28.032 "data_size": 63488 00:24:28.032 }, 00:24:28.032 { 00:24:28.032 "name": "BaseBdev2", 00:24:28.032 "uuid": "0747e568-c45b-4ca1-8777-7063f35a1ef0", 00:24:28.032 "is_configured": true, 00:24:28.032 "data_offset": 2048, 00:24:28.032 "data_size": 63488 00:24:28.032 }, 00:24:28.032 { 00:24:28.032 "name": "BaseBdev3", 00:24:28.032 "uuid": "a6da9f1a-1bed-44c7-86a9-fe6bf2135090", 00:24:28.032 "is_configured": true, 00:24:28.032 "data_offset": 2048, 00:24:28.032 "data_size": 63488 00:24:28.032 }, 00:24:28.032 { 00:24:28.032 "name": "BaseBdev4", 00:24:28.032 "uuid": "adc95603-0c50-4dd4-80f0-032b6c0279bd", 00:24:28.032 "is_configured": true, 00:24:28.032 "data_offset": 2048, 00:24:28.032 "data_size": 63488 00:24:28.032 } 00:24:28.032 ] 00:24:28.032 }' 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:28.032 13:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.599 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.599 13:44:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:28.599 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 49adc512-a1e0-4781-b548-6630090b9f69 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.600 [2024-11-20 13:44:31.369020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:28.600 [2024-11-20 13:44:31.369322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:28.600 [2024-11-20 13:44:31.369340] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:28.600 NewBaseBdev 00:24:28.600 [2024-11-20 13:44:31.369660] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:24:28.600 [2024-11-20 13:44:31.369833] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:28.600 [2024-11-20 13:44:31.369854] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:24:28.600 [2024-11-20 13:44:31.370030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.600 
13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.600 [ 00:24:28.600 { 00:24:28.600 "name": "NewBaseBdev", 00:24:28.600 "aliases": [ 00:24:28.600 "49adc512-a1e0-4781-b548-6630090b9f69" 00:24:28.600 ], 00:24:28.600 "product_name": "Malloc disk", 00:24:28.600 "block_size": 512, 00:24:28.600 "num_blocks": 65536, 00:24:28.600 "uuid": "49adc512-a1e0-4781-b548-6630090b9f69", 00:24:28.600 "assigned_rate_limits": { 00:24:28.600 "rw_ios_per_sec": 0, 00:24:28.600 "rw_mbytes_per_sec": 0, 00:24:28.600 "r_mbytes_per_sec": 0, 00:24:28.600 "w_mbytes_per_sec": 0 00:24:28.600 }, 00:24:28.600 "claimed": true, 00:24:28.600 "claim_type": "exclusive_write", 00:24:28.600 "zoned": false, 00:24:28.600 "supported_io_types": { 00:24:28.600 "read": true, 00:24:28.600 "write": true, 00:24:28.600 "unmap": true, 00:24:28.600 "flush": true, 00:24:28.600 "reset": true, 00:24:28.600 "nvme_admin": false, 00:24:28.600 "nvme_io": false, 00:24:28.600 "nvme_io_md": false, 00:24:28.600 "write_zeroes": true, 00:24:28.600 "zcopy": true, 00:24:28.600 "get_zone_info": false, 00:24:28.600 "zone_management": false, 00:24:28.600 "zone_append": false, 00:24:28.600 "compare": false, 00:24:28.600 "compare_and_write": false, 00:24:28.600 "abort": true, 00:24:28.600 "seek_hole": false, 00:24:28.600 "seek_data": false, 00:24:28.600 "copy": true, 00:24:28.600 "nvme_iov_md": false 00:24:28.600 }, 00:24:28.600 "memory_domains": [ 00:24:28.600 { 00:24:28.600 "dma_device_id": "system", 00:24:28.600 "dma_device_type": 1 00:24:28.600 }, 00:24:28.600 { 00:24:28.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.600 "dma_device_type": 2 00:24:28.600 } 00:24:28.600 ], 00:24:28.600 "driver_specific": {} 00:24:28.600 } 00:24:28.600 ] 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:24:28.600 13:44:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:28.600 "name": "Existed_Raid", 00:24:28.600 "uuid": "d2e226e7-2e69-412b-b8b3-efdf2a44d576", 00:24:28.600 "strip_size_kb": 64, 00:24:28.600 
"state": "online", 00:24:28.600 "raid_level": "raid0", 00:24:28.600 "superblock": true, 00:24:28.600 "num_base_bdevs": 4, 00:24:28.600 "num_base_bdevs_discovered": 4, 00:24:28.600 "num_base_bdevs_operational": 4, 00:24:28.600 "base_bdevs_list": [ 00:24:28.600 { 00:24:28.600 "name": "NewBaseBdev", 00:24:28.600 "uuid": "49adc512-a1e0-4781-b548-6630090b9f69", 00:24:28.600 "is_configured": true, 00:24:28.600 "data_offset": 2048, 00:24:28.600 "data_size": 63488 00:24:28.600 }, 00:24:28.600 { 00:24:28.600 "name": "BaseBdev2", 00:24:28.600 "uuid": "0747e568-c45b-4ca1-8777-7063f35a1ef0", 00:24:28.600 "is_configured": true, 00:24:28.600 "data_offset": 2048, 00:24:28.600 "data_size": 63488 00:24:28.600 }, 00:24:28.600 { 00:24:28.600 "name": "BaseBdev3", 00:24:28.600 "uuid": "a6da9f1a-1bed-44c7-86a9-fe6bf2135090", 00:24:28.600 "is_configured": true, 00:24:28.600 "data_offset": 2048, 00:24:28.600 "data_size": 63488 00:24:28.600 }, 00:24:28.600 { 00:24:28.600 "name": "BaseBdev4", 00:24:28.600 "uuid": "adc95603-0c50-4dd4-80f0-032b6c0279bd", 00:24:28.600 "is_configured": true, 00:24:28.600 "data_offset": 2048, 00:24:28.600 "data_size": 63488 00:24:28.600 } 00:24:28.600 ] 00:24:28.600 }' 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:28.600 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.168 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:24:29.168 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:29.168 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:29.168 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:29.168 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:24:29.168 
13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:29.168 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:29.168 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.168 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:29.168 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.168 [2024-11-20 13:44:31.917716] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:29.168 13:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.168 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:29.168 "name": "Existed_Raid", 00:24:29.168 "aliases": [ 00:24:29.168 "d2e226e7-2e69-412b-b8b3-efdf2a44d576" 00:24:29.168 ], 00:24:29.168 "product_name": "Raid Volume", 00:24:29.168 "block_size": 512, 00:24:29.168 "num_blocks": 253952, 00:24:29.168 "uuid": "d2e226e7-2e69-412b-b8b3-efdf2a44d576", 00:24:29.168 "assigned_rate_limits": { 00:24:29.168 "rw_ios_per_sec": 0, 00:24:29.168 "rw_mbytes_per_sec": 0, 00:24:29.168 "r_mbytes_per_sec": 0, 00:24:29.168 "w_mbytes_per_sec": 0 00:24:29.168 }, 00:24:29.168 "claimed": false, 00:24:29.168 "zoned": false, 00:24:29.168 "supported_io_types": { 00:24:29.168 "read": true, 00:24:29.168 "write": true, 00:24:29.168 "unmap": true, 00:24:29.168 "flush": true, 00:24:29.168 "reset": true, 00:24:29.168 "nvme_admin": false, 00:24:29.168 "nvme_io": false, 00:24:29.168 "nvme_io_md": false, 00:24:29.168 "write_zeroes": true, 00:24:29.168 "zcopy": false, 00:24:29.168 "get_zone_info": false, 00:24:29.168 "zone_management": false, 00:24:29.168 "zone_append": false, 00:24:29.168 "compare": false, 00:24:29.168 "compare_and_write": false, 00:24:29.168 "abort": 
false, 00:24:29.168 "seek_hole": false, 00:24:29.168 "seek_data": false, 00:24:29.168 "copy": false, 00:24:29.168 "nvme_iov_md": false 00:24:29.168 }, 00:24:29.168 "memory_domains": [ 00:24:29.168 { 00:24:29.168 "dma_device_id": "system", 00:24:29.168 "dma_device_type": 1 00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.168 "dma_device_type": 2 00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "dma_device_id": "system", 00:24:29.168 "dma_device_type": 1 00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.168 "dma_device_type": 2 00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "dma_device_id": "system", 00:24:29.168 "dma_device_type": 1 00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.168 "dma_device_type": 2 00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "dma_device_id": "system", 00:24:29.168 "dma_device_type": 1 00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.168 "dma_device_type": 2 00:24:29.168 } 00:24:29.168 ], 00:24:29.168 "driver_specific": { 00:24:29.168 "raid": { 00:24:29.168 "uuid": "d2e226e7-2e69-412b-b8b3-efdf2a44d576", 00:24:29.168 "strip_size_kb": 64, 00:24:29.168 "state": "online", 00:24:29.169 "raid_level": "raid0", 00:24:29.169 "superblock": true, 00:24:29.169 "num_base_bdevs": 4, 00:24:29.169 "num_base_bdevs_discovered": 4, 00:24:29.169 "num_base_bdevs_operational": 4, 00:24:29.169 "base_bdevs_list": [ 00:24:29.169 { 00:24:29.169 "name": "NewBaseBdev", 00:24:29.169 "uuid": "49adc512-a1e0-4781-b548-6630090b9f69", 00:24:29.169 "is_configured": true, 00:24:29.169 "data_offset": 2048, 00:24:29.169 "data_size": 63488 00:24:29.169 }, 00:24:29.169 { 00:24:29.169 "name": "BaseBdev2", 00:24:29.169 "uuid": "0747e568-c45b-4ca1-8777-7063f35a1ef0", 00:24:29.169 "is_configured": true, 00:24:29.169 "data_offset": 2048, 00:24:29.169 "data_size": 63488 00:24:29.169 }, 00:24:29.169 { 00:24:29.169 
"name": "BaseBdev3", 00:24:29.169 "uuid": "a6da9f1a-1bed-44c7-86a9-fe6bf2135090", 00:24:29.169 "is_configured": true, 00:24:29.169 "data_offset": 2048, 00:24:29.169 "data_size": 63488 00:24:29.169 }, 00:24:29.169 { 00:24:29.169 "name": "BaseBdev4", 00:24:29.169 "uuid": "adc95603-0c50-4dd4-80f0-032b6c0279bd", 00:24:29.169 "is_configured": true, 00:24:29.169 "data_offset": 2048, 00:24:29.169 "data_size": 63488 00:24:29.169 } 00:24:29.169 ] 00:24:29.169 } 00:24:29.169 } 00:24:29.169 }' 00:24:29.169 13:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:29.169 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:24:29.169 BaseBdev2 00:24:29.169 BaseBdev3 00:24:29.169 BaseBdev4' 00:24:29.169 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:29.169 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:29.169 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:29.169 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:24:29.169 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.169 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.169 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:29.428 13:44:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:29.428 [2024-11-20 13:44:32.281361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:29.428 [2024-11-20 13:44:32.281543] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:29.428 [2024-11-20 13:44:32.281669] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:29.428 [2024-11-20 13:44:32.281762] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:29.428 [2024-11-20 13:44:32.281779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70330 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70330 ']' 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70330 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70330 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70330' 00:24:29.428 killing process with pid 70330 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70330 00:24:29.428 [2024-11-20 13:44:32.326849] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:29.428 13:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70330 00:24:29.996 [2024-11-20 13:44:32.720522] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:30.932 13:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:24:30.932 00:24:30.932 real 0m12.945s 00:24:30.932 user 0m21.360s 00:24:30.932 sys 0m1.839s 00:24:30.932 ************************************ 00:24:30.932 END TEST raid_state_function_test_sb 00:24:30.932 
************************************ 00:24:30.932 13:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:30.932 13:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.932 13:44:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:24:30.932 13:44:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:30.932 13:44:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:30.932 13:44:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:30.932 ************************************ 00:24:30.932 START TEST raid_superblock_test 00:24:30.932 ************************************ 00:24:30.932 13:44:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:24:30.932 13:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:24:30.932 13:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:24:30.932 13:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:30.932 13:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:30.932 13:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:30.932 13:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:24:30.932 13:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:30.932 13:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:30.932 13:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:24:30.932 13:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:30.932 13:44:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:30.932 13:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:30.932 13:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:30.933 13:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:24:30.933 13:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:24:30.933 13:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:24:30.933 13:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71015 00:24:30.933 13:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:24:30.933 13:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71015 00:24:30.933 13:44:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71015 ']' 00:24:30.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.933 13:44:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.933 13:44:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:30.933 13:44:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.933 13:44:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:30.933 13:44:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:31.240 [2024-11-20 13:44:33.942678] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:24:31.240 [2024-11-20 13:44:33.942887] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71015 ] 00:24:31.240 [2024-11-20 13:44:34.142260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.501 [2024-11-20 13:44:34.299944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.769 [2024-11-20 13:44:34.518964] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:31.769 [2024-11-20 13:44:34.519010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:24:32.075 
13:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.075 malloc1 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.075 [2024-11-20 13:44:34.935612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:32.075 [2024-11-20 13:44:34.935684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:32.075 [2024-11-20 13:44:34.935716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:32.075 [2024-11-20 13:44:34.935731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:32.075 [2024-11-20 13:44:34.938644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:32.075 [2024-11-20 13:44:34.938688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:32.075 pt1 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.075 malloc2 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.075 13:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.335 [2024-11-20 13:44:34.992101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:32.335 [2024-11-20 13:44:34.992180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:32.335 [2024-11-20 13:44:34.992217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:32.335 [2024-11-20 13:44:34.992232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:32.335 [2024-11-20 13:44:34.995147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:32.335 [2024-11-20 13:44:34.995193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:32.335 
pt2 00:24:32.335 13:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.335 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:32.335 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:32.335 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:24:32.335 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:24:32.335 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:32.335 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:32.335 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:32.335 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:32.335 13:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:24:32.335 13:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.335 13:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.335 malloc3 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.335 [2024-11-20 13:44:35.054348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:32.335 [2024-11-20 13:44:35.054550] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:32.335 [2024-11-20 13:44:35.054595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:32.335 [2024-11-20 13:44:35.054611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:32.335 [2024-11-20 13:44:35.057509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:32.335 [2024-11-20 13:44:35.057555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:32.335 pt3 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.335 malloc4 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.335 [2024-11-20 13:44:35.106540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:32.335 [2024-11-20 13:44:35.106614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:32.335 [2024-11-20 13:44:35.106644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:32.335 [2024-11-20 13:44:35.106658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:32.335 [2024-11-20 13:44:35.109427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:32.335 [2024-11-20 13:44:35.109596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:32.335 pt4 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.335 [2024-11-20 13:44:35.114563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:32.335 [2024-11-20 
13:44:35.117006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:32.335 [2024-11-20 13:44:35.117136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:32.335 [2024-11-20 13:44:35.117209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:32.335 [2024-11-20 13:44:35.117453] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:32.335 [2024-11-20 13:44:35.117471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:32.335 [2024-11-20 13:44:35.117784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:32.335 [2024-11-20 13:44:35.118037] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:32.335 [2024-11-20 13:44:35.118059] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:32.335 [2024-11-20 13:44:35.118259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.335 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:32.335 "name": "raid_bdev1", 00:24:32.335 "uuid": "a61a6370-56be-4153-9ce5-943087b47b33", 00:24:32.335 "strip_size_kb": 64, 00:24:32.336 "state": "online", 00:24:32.336 "raid_level": "raid0", 00:24:32.336 "superblock": true, 00:24:32.336 "num_base_bdevs": 4, 00:24:32.336 "num_base_bdevs_discovered": 4, 00:24:32.336 "num_base_bdevs_operational": 4, 00:24:32.336 "base_bdevs_list": [ 00:24:32.336 { 00:24:32.336 "name": "pt1", 00:24:32.336 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:32.336 "is_configured": true, 00:24:32.336 "data_offset": 2048, 00:24:32.336 "data_size": 63488 00:24:32.336 }, 00:24:32.336 { 00:24:32.336 "name": "pt2", 00:24:32.336 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:32.336 "is_configured": true, 00:24:32.336 "data_offset": 2048, 00:24:32.336 "data_size": 63488 00:24:32.336 }, 00:24:32.336 { 00:24:32.336 "name": "pt3", 00:24:32.336 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:32.336 "is_configured": true, 00:24:32.336 "data_offset": 2048, 00:24:32.336 
"data_size": 63488 00:24:32.336 }, 00:24:32.336 { 00:24:32.336 "name": "pt4", 00:24:32.336 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:32.336 "is_configured": true, 00:24:32.336 "data_offset": 2048, 00:24:32.336 "data_size": 63488 00:24:32.336 } 00:24:32.336 ] 00:24:32.336 }' 00:24:32.336 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:32.336 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.904 [2024-11-20 13:44:35.647144] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:32.904 "name": "raid_bdev1", 00:24:32.904 "aliases": [ 00:24:32.904 "a61a6370-56be-4153-9ce5-943087b47b33" 
00:24:32.904 ], 00:24:32.904 "product_name": "Raid Volume", 00:24:32.904 "block_size": 512, 00:24:32.904 "num_blocks": 253952, 00:24:32.904 "uuid": "a61a6370-56be-4153-9ce5-943087b47b33", 00:24:32.904 "assigned_rate_limits": { 00:24:32.904 "rw_ios_per_sec": 0, 00:24:32.904 "rw_mbytes_per_sec": 0, 00:24:32.904 "r_mbytes_per_sec": 0, 00:24:32.904 "w_mbytes_per_sec": 0 00:24:32.904 }, 00:24:32.904 "claimed": false, 00:24:32.904 "zoned": false, 00:24:32.904 "supported_io_types": { 00:24:32.904 "read": true, 00:24:32.904 "write": true, 00:24:32.904 "unmap": true, 00:24:32.904 "flush": true, 00:24:32.904 "reset": true, 00:24:32.904 "nvme_admin": false, 00:24:32.904 "nvme_io": false, 00:24:32.904 "nvme_io_md": false, 00:24:32.904 "write_zeroes": true, 00:24:32.904 "zcopy": false, 00:24:32.904 "get_zone_info": false, 00:24:32.904 "zone_management": false, 00:24:32.904 "zone_append": false, 00:24:32.904 "compare": false, 00:24:32.904 "compare_and_write": false, 00:24:32.904 "abort": false, 00:24:32.904 "seek_hole": false, 00:24:32.904 "seek_data": false, 00:24:32.904 "copy": false, 00:24:32.904 "nvme_iov_md": false 00:24:32.904 }, 00:24:32.904 "memory_domains": [ 00:24:32.904 { 00:24:32.904 "dma_device_id": "system", 00:24:32.904 "dma_device_type": 1 00:24:32.904 }, 00:24:32.904 { 00:24:32.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:32.904 "dma_device_type": 2 00:24:32.904 }, 00:24:32.904 { 00:24:32.904 "dma_device_id": "system", 00:24:32.904 "dma_device_type": 1 00:24:32.904 }, 00:24:32.904 { 00:24:32.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:32.904 "dma_device_type": 2 00:24:32.904 }, 00:24:32.904 { 00:24:32.904 "dma_device_id": "system", 00:24:32.904 "dma_device_type": 1 00:24:32.904 }, 00:24:32.904 { 00:24:32.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:32.904 "dma_device_type": 2 00:24:32.904 }, 00:24:32.904 { 00:24:32.904 "dma_device_id": "system", 00:24:32.904 "dma_device_type": 1 00:24:32.904 }, 00:24:32.904 { 00:24:32.904 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:24:32.904 "dma_device_type": 2 00:24:32.904 } 00:24:32.904 ], 00:24:32.904 "driver_specific": { 00:24:32.904 "raid": { 00:24:32.904 "uuid": "a61a6370-56be-4153-9ce5-943087b47b33", 00:24:32.904 "strip_size_kb": 64, 00:24:32.904 "state": "online", 00:24:32.904 "raid_level": "raid0", 00:24:32.904 "superblock": true, 00:24:32.904 "num_base_bdevs": 4, 00:24:32.904 "num_base_bdevs_discovered": 4, 00:24:32.904 "num_base_bdevs_operational": 4, 00:24:32.904 "base_bdevs_list": [ 00:24:32.904 { 00:24:32.904 "name": "pt1", 00:24:32.904 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:32.904 "is_configured": true, 00:24:32.904 "data_offset": 2048, 00:24:32.904 "data_size": 63488 00:24:32.904 }, 00:24:32.904 { 00:24:32.904 "name": "pt2", 00:24:32.904 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:32.904 "is_configured": true, 00:24:32.904 "data_offset": 2048, 00:24:32.904 "data_size": 63488 00:24:32.904 }, 00:24:32.904 { 00:24:32.904 "name": "pt3", 00:24:32.904 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:32.904 "is_configured": true, 00:24:32.904 "data_offset": 2048, 00:24:32.904 "data_size": 63488 00:24:32.904 }, 00:24:32.904 { 00:24:32.904 "name": "pt4", 00:24:32.904 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:32.904 "is_configured": true, 00:24:32.904 "data_offset": 2048, 00:24:32.904 "data_size": 63488 00:24:32.904 } 00:24:32.904 ] 00:24:32.904 } 00:24:32.904 } 00:24:32.904 }' 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:32.904 pt2 00:24:32.904 pt3 00:24:32.904 pt4' 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:32.904 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.163 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:33.163 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:33.163 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:33.163 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:33.163 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.163 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.163 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:33.163 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.164 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:33.164 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:33.164 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:33.164 13:44:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:24:33.164 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.164 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:33.164 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.164 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.164 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:33.164 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:33.164 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:33.164 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:33.164 13:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:24:33.164 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.164 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.164 13:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.164 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:33.164 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:33.164 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:24:33.164 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:33.164 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:33.164 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.164 [2024-11-20 13:44:36.031193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:33.164 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a61a6370-56be-4153-9ce5-943087b47b33 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a61a6370-56be-4153-9ce5-943087b47b33 ']' 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.423 [2024-11-20 13:44:36.090813] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:33.423 [2024-11-20 13:44:36.090992] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:33.423 [2024-11-20 13:44:36.091143] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:33.423 [2024-11-20 13:44:36.091238] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:33.423 [2024-11-20 13:44:36.091261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:33.423 13:44:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.423 [2024-11-20 13:44:36.270933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:33.423 [2024-11-20 13:44:36.273426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:33.423 [2024-11-20 13:44:36.273495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:33.423 [2024-11-20 13:44:36.273550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:33.423 [2024-11-20 13:44:36.273624] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:33.423 [2024-11-20 13:44:36.273695] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:33.423 [2024-11-20 13:44:36.273729] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:24:33.423 [2024-11-20 13:44:36.273759] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:24:33.423 [2024-11-20 13:44:36.273781] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:33.423 [2024-11-20 13:44:36.273799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:24:33.423 request: 00:24:33.423 { 00:24:33.423 "name": "raid_bdev1", 00:24:33.423 "raid_level": "raid0", 00:24:33.423 "base_bdevs": [ 00:24:33.423 "malloc1", 00:24:33.423 "malloc2", 00:24:33.423 "malloc3", 00:24:33.423 "malloc4" 00:24:33.423 ], 00:24:33.423 "strip_size_kb": 64, 00:24:33.423 "superblock": false, 00:24:33.423 "method": "bdev_raid_create", 00:24:33.423 "req_id": 1 00:24:33.423 } 00:24:33.423 Got JSON-RPC error response 00:24:33.423 response: 00:24:33.423 { 00:24:33.423 "code": -17, 00:24:33.423 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:33.423 } 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:24:33.423 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.683 [2024-11-20 13:44:36.346887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:33.683 [2024-11-20 13:44:36.347127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:33.683 [2024-11-20 13:44:36.347203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:33.683 [2024-11-20 13:44:36.347457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:33.683 [2024-11-20 13:44:36.350535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:33.683 [2024-11-20 13:44:36.350704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:33.683 [2024-11-20 13:44:36.350963] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:33.683 [2024-11-20 13:44:36.351180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:33.683 pt1 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:33.683 "name": "raid_bdev1", 00:24:33.683 "uuid": "a61a6370-56be-4153-9ce5-943087b47b33", 00:24:33.683 "strip_size_kb": 64, 00:24:33.683 "state": "configuring", 00:24:33.683 "raid_level": "raid0", 00:24:33.683 "superblock": true, 00:24:33.683 "num_base_bdevs": 4, 00:24:33.683 "num_base_bdevs_discovered": 1, 00:24:33.683 "num_base_bdevs_operational": 4, 00:24:33.683 "base_bdevs_list": [ 00:24:33.683 { 00:24:33.683 "name": "pt1", 00:24:33.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:33.683 "is_configured": true, 00:24:33.683 "data_offset": 2048, 00:24:33.683 "data_size": 63488 00:24:33.683 }, 00:24:33.683 { 00:24:33.683 "name": null, 00:24:33.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:33.683 "is_configured": false, 00:24:33.683 "data_offset": 2048, 00:24:33.683 "data_size": 63488 00:24:33.683 }, 00:24:33.683 { 00:24:33.683 "name": null, 00:24:33.683 
"uuid": "00000000-0000-0000-0000-000000000003", 00:24:33.683 "is_configured": false, 00:24:33.683 "data_offset": 2048, 00:24:33.683 "data_size": 63488 00:24:33.683 }, 00:24:33.683 { 00:24:33.683 "name": null, 00:24:33.683 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:33.683 "is_configured": false, 00:24:33.683 "data_offset": 2048, 00:24:33.683 "data_size": 63488 00:24:33.683 } 00:24:33.683 ] 00:24:33.683 }' 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:33.683 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.252 [2024-11-20 13:44:36.879303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:34.252 [2024-11-20 13:44:36.879399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:34.252 [2024-11-20 13:44:36.879458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:34.252 [2024-11-20 13:44:36.879480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:34.252 [2024-11-20 13:44:36.880084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:34.252 [2024-11-20 13:44:36.880123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:34.252 [2024-11-20 13:44:36.880228] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:34.252 [2024-11-20 13:44:36.880264] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:34.252 pt2 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.252 [2024-11-20 13:44:36.887283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.252 13:44:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:34.252 "name": "raid_bdev1", 00:24:34.252 "uuid": "a61a6370-56be-4153-9ce5-943087b47b33", 00:24:34.252 "strip_size_kb": 64, 00:24:34.252 "state": "configuring", 00:24:34.252 "raid_level": "raid0", 00:24:34.252 "superblock": true, 00:24:34.252 "num_base_bdevs": 4, 00:24:34.252 "num_base_bdevs_discovered": 1, 00:24:34.252 "num_base_bdevs_operational": 4, 00:24:34.252 "base_bdevs_list": [ 00:24:34.252 { 00:24:34.252 "name": "pt1", 00:24:34.252 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:34.252 "is_configured": true, 00:24:34.252 "data_offset": 2048, 00:24:34.252 "data_size": 63488 00:24:34.252 }, 00:24:34.252 { 00:24:34.252 "name": null, 00:24:34.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:34.252 "is_configured": false, 00:24:34.252 "data_offset": 0, 00:24:34.252 "data_size": 63488 00:24:34.252 }, 00:24:34.252 { 00:24:34.252 "name": null, 00:24:34.252 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:34.252 "is_configured": false, 00:24:34.252 "data_offset": 2048, 00:24:34.252 "data_size": 63488 00:24:34.252 }, 00:24:34.252 { 00:24:34.252 "name": null, 00:24:34.252 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:34.252 "is_configured": false, 00:24:34.252 "data_offset": 2048, 00:24:34.252 "data_size": 63488 00:24:34.252 } 00:24:34.252 ] 00:24:34.252 }' 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:34.252 13:44:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.512 [2024-11-20 13:44:37.383451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:34.512 [2024-11-20 13:44:37.383550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:34.512 [2024-11-20 13:44:37.383588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:34.512 [2024-11-20 13:44:37.383618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:34.512 [2024-11-20 13:44:37.384225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:34.512 [2024-11-20 13:44:37.384257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:34.512 [2024-11-20 13:44:37.384367] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:34.512 [2024-11-20 13:44:37.384399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:34.512 pt2 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.512 [2024-11-20 13:44:37.391386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:34.512 [2024-11-20 13:44:37.391441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:34.512 [2024-11-20 13:44:37.391467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:24:34.512 [2024-11-20 13:44:37.391480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:34.512 [2024-11-20 13:44:37.391973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:34.512 [2024-11-20 13:44:37.392005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:34.512 [2024-11-20 13:44:37.392093] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:24:34.512 [2024-11-20 13:44:37.392128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:34.512 pt3 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.512 [2024-11-20 13:44:37.403380] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:34.512 [2024-11-20 13:44:37.403442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:34.512 [2024-11-20 13:44:37.403476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:34.512 [2024-11-20 13:44:37.403489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:34.512 [2024-11-20 13:44:37.404008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:34.512 [2024-11-20 13:44:37.404040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:34.512 [2024-11-20 13:44:37.404123] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:34.512 [2024-11-20 13:44:37.404156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:34.512 [2024-11-20 13:44:37.404343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:34.512 [2024-11-20 13:44:37.404359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:34.512 [2024-11-20 13:44:37.404673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:34.512 [2024-11-20 13:44:37.404870] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:34.512 [2024-11-20 13:44:37.404908] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:34.512 [2024-11-20 13:44:37.405069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:34.512 pt4 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.512 13:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.771 13:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.771 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:34.771 "name": "raid_bdev1", 00:24:34.771 "uuid": "a61a6370-56be-4153-9ce5-943087b47b33", 00:24:34.771 "strip_size_kb": 64, 00:24:34.771 "state": "online", 00:24:34.771 "raid_level": "raid0", 00:24:34.772 
"superblock": true, 00:24:34.772 "num_base_bdevs": 4, 00:24:34.772 "num_base_bdevs_discovered": 4, 00:24:34.772 "num_base_bdevs_operational": 4, 00:24:34.772 "base_bdevs_list": [ 00:24:34.772 { 00:24:34.772 "name": "pt1", 00:24:34.772 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:34.772 "is_configured": true, 00:24:34.772 "data_offset": 2048, 00:24:34.772 "data_size": 63488 00:24:34.772 }, 00:24:34.772 { 00:24:34.772 "name": "pt2", 00:24:34.772 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:34.772 "is_configured": true, 00:24:34.772 "data_offset": 2048, 00:24:34.772 "data_size": 63488 00:24:34.772 }, 00:24:34.772 { 00:24:34.772 "name": "pt3", 00:24:34.772 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:34.772 "is_configured": true, 00:24:34.772 "data_offset": 2048, 00:24:34.772 "data_size": 63488 00:24:34.772 }, 00:24:34.772 { 00:24:34.772 "name": "pt4", 00:24:34.772 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:34.772 "is_configured": true, 00:24:34.772 "data_offset": 2048, 00:24:34.772 "data_size": 63488 00:24:34.772 } 00:24:34.772 ] 00:24:34.772 }' 00:24:34.772 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:34.772 13:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.340 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:24:35.340 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:35.340 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:35.340 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:35.340 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:35.340 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:35.340 13:44:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:35.340 13:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.340 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:35.340 13:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.340 [2024-11-20 13:44:37.956047] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:35.340 13:44:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.340 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:35.340 "name": "raid_bdev1", 00:24:35.340 "aliases": [ 00:24:35.340 "a61a6370-56be-4153-9ce5-943087b47b33" 00:24:35.340 ], 00:24:35.340 "product_name": "Raid Volume", 00:24:35.340 "block_size": 512, 00:24:35.340 "num_blocks": 253952, 00:24:35.340 "uuid": "a61a6370-56be-4153-9ce5-943087b47b33", 00:24:35.340 "assigned_rate_limits": { 00:24:35.340 "rw_ios_per_sec": 0, 00:24:35.340 "rw_mbytes_per_sec": 0, 00:24:35.340 "r_mbytes_per_sec": 0, 00:24:35.340 "w_mbytes_per_sec": 0 00:24:35.340 }, 00:24:35.340 "claimed": false, 00:24:35.340 "zoned": false, 00:24:35.340 "supported_io_types": { 00:24:35.340 "read": true, 00:24:35.340 "write": true, 00:24:35.340 "unmap": true, 00:24:35.340 "flush": true, 00:24:35.340 "reset": true, 00:24:35.340 "nvme_admin": false, 00:24:35.340 "nvme_io": false, 00:24:35.340 "nvme_io_md": false, 00:24:35.340 "write_zeroes": true, 00:24:35.340 "zcopy": false, 00:24:35.340 "get_zone_info": false, 00:24:35.340 "zone_management": false, 00:24:35.340 "zone_append": false, 00:24:35.340 "compare": false, 00:24:35.340 "compare_and_write": false, 00:24:35.340 "abort": false, 00:24:35.340 "seek_hole": false, 00:24:35.340 "seek_data": false, 00:24:35.340 "copy": false, 00:24:35.340 "nvme_iov_md": false 00:24:35.340 }, 00:24:35.340 
"memory_domains": [ 00:24:35.340 { 00:24:35.340 "dma_device_id": "system", 00:24:35.340 "dma_device_type": 1 00:24:35.340 }, 00:24:35.340 { 00:24:35.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.340 "dma_device_type": 2 00:24:35.340 }, 00:24:35.340 { 00:24:35.340 "dma_device_id": "system", 00:24:35.340 "dma_device_type": 1 00:24:35.340 }, 00:24:35.340 { 00:24:35.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.340 "dma_device_type": 2 00:24:35.340 }, 00:24:35.340 { 00:24:35.340 "dma_device_id": "system", 00:24:35.340 "dma_device_type": 1 00:24:35.340 }, 00:24:35.340 { 00:24:35.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.340 "dma_device_type": 2 00:24:35.340 }, 00:24:35.340 { 00:24:35.340 "dma_device_id": "system", 00:24:35.340 "dma_device_type": 1 00:24:35.340 }, 00:24:35.340 { 00:24:35.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.340 "dma_device_type": 2 00:24:35.340 } 00:24:35.340 ], 00:24:35.340 "driver_specific": { 00:24:35.340 "raid": { 00:24:35.340 "uuid": "a61a6370-56be-4153-9ce5-943087b47b33", 00:24:35.340 "strip_size_kb": 64, 00:24:35.340 "state": "online", 00:24:35.340 "raid_level": "raid0", 00:24:35.340 "superblock": true, 00:24:35.340 "num_base_bdevs": 4, 00:24:35.340 "num_base_bdevs_discovered": 4, 00:24:35.340 "num_base_bdevs_operational": 4, 00:24:35.340 "base_bdevs_list": [ 00:24:35.340 { 00:24:35.340 "name": "pt1", 00:24:35.340 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:35.340 "is_configured": true, 00:24:35.340 "data_offset": 2048, 00:24:35.340 "data_size": 63488 00:24:35.340 }, 00:24:35.340 { 00:24:35.340 "name": "pt2", 00:24:35.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:35.340 "is_configured": true, 00:24:35.340 "data_offset": 2048, 00:24:35.340 "data_size": 63488 00:24:35.340 }, 00:24:35.340 { 00:24:35.340 "name": "pt3", 00:24:35.340 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:35.340 "is_configured": true, 00:24:35.340 "data_offset": 2048, 00:24:35.340 "data_size": 63488 
00:24:35.340 }, 00:24:35.340 { 00:24:35.340 "name": "pt4", 00:24:35.340 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:35.340 "is_configured": true, 00:24:35.340 "data_offset": 2048, 00:24:35.340 "data_size": 63488 00:24:35.340 } 00:24:35.340 ] 00:24:35.340 } 00:24:35.340 } 00:24:35.340 }' 00:24:35.341 13:44:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:35.341 pt2 00:24:35.341 pt3 00:24:35.341 pt4' 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:35.341 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:35.599 
13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.599 [2024-11-20 13:44:38.340081] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a61a6370-56be-4153-9ce5-943087b47b33 '!=' a61a6370-56be-4153-9ce5-943087b47b33 ']' 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71015 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71015 ']' 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71015 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71015 00:24:35.599 killing process with pid 71015 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71015' 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 71015 00:24:35.599 [2024-11-20 13:44:38.422144] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:35.599 13:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 71015 00:24:35.599 [2024-11-20 13:44:38.422252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:35.600 [2024-11-20 13:44:38.422365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:35.600 [2024-11-20 13:44:38.422380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:24:35.932 [2024-11-20 13:44:38.793002] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:37.306 13:44:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:24:37.306 00:24:37.306 real 0m6.069s 00:24:37.306 user 0m9.050s 00:24:37.306 sys 0m0.936s 00:24:37.306 13:44:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:37.306 ************************************ 00:24:37.306 END TEST raid_superblock_test 00:24:37.306 ************************************ 00:24:37.306 13:44:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:37.306 13:44:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:24:37.306 13:44:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:37.306 13:44:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:37.306 13:44:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:37.306 ************************************ 00:24:37.306 START TEST raid_read_error_test 00:24:37.306 ************************************ 00:24:37.306 13:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:24:37.306 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:24:37.306 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.t3j9iNEUVv 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71281 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71281 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71281 ']' 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:37.307 13:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:37.307 [2024-11-20 13:44:40.079139] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:24:37.307 [2024-11-20 13:44:40.079510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71281 ] 00:24:37.566 [2024-11-20 13:44:40.271951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.566 [2024-11-20 13:44:40.441457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.823 [2024-11-20 13:44:40.685289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:37.823 [2024-11-20 13:44:40.685360] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.392 BaseBdev1_malloc 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.392 true 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.392 [2024-11-20 13:44:41.210020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:38.392 [2024-11-20 13:44:41.210235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.392 [2024-11-20 13:44:41.210277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:38.392 [2024-11-20 13:44:41.210297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.392 [2024-11-20 13:44:41.213136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.392 [2024-11-20 13:44:41.213186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:38.392 BaseBdev1 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.392 BaseBdev2_malloc 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.392 true 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.392 [2024-11-20 13:44:41.270273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:38.392 [2024-11-20 13:44:41.270346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.392 [2024-11-20 13:44:41.270371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:38.392 [2024-11-20 13:44:41.270389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.392 [2024-11-20 13:44:41.273393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.392 [2024-11-20 13:44:41.273457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:38.392 BaseBdev2 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.392 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.651 BaseBdev3_malloc 00:24:38.651 13:44:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.651 true 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.651 [2024-11-20 13:44:41.337451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:38.651 [2024-11-20 13:44:41.337523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.651 [2024-11-20 13:44:41.337553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:38.651 [2024-11-20 13:44:41.337576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.651 [2024-11-20 13:44:41.340446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.651 [2024-11-20 13:44:41.340503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:38.651 BaseBdev3 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.651 BaseBdev4_malloc 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.651 true 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.651 [2024-11-20 13:44:41.399713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:38.651 [2024-11-20 13:44:41.399796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:38.651 [2024-11-20 13:44:41.399823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:38.651 [2024-11-20 13:44:41.399840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:38.651 [2024-11-20 13:44:41.402896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:38.651 [2024-11-20 13:44:41.402988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:38.651 BaseBdev4 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.651 [2024-11-20 13:44:41.411930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:38.651 [2024-11-20 13:44:41.414417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:38.651 [2024-11-20 13:44:41.414539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:38.651 [2024-11-20 13:44:41.414642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:38.651 [2024-11-20 13:44:41.414996] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:24:38.651 [2024-11-20 13:44:41.415050] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:38.651 [2024-11-20 13:44:41.415357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:24:38.651 [2024-11-20 13:44:41.415646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:24:38.651 [2024-11-20 13:44:41.415666] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:24:38.651 [2024-11-20 13:44:41.415908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:38.651 13:44:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.651 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:38.651 "name": "raid_bdev1", 00:24:38.651 "uuid": "2b0a6234-ad82-4b40-bcbc-542b573e8d89", 00:24:38.651 "strip_size_kb": 64, 00:24:38.652 "state": "online", 00:24:38.652 "raid_level": "raid0", 00:24:38.652 "superblock": true, 00:24:38.652 "num_base_bdevs": 4, 00:24:38.652 "num_base_bdevs_discovered": 4, 00:24:38.652 "num_base_bdevs_operational": 4, 00:24:38.652 "base_bdevs_list": [ 00:24:38.652 
{ 00:24:38.652 "name": "BaseBdev1", 00:24:38.652 "uuid": "94bde3e3-21c2-56d8-b6c5-25ebca50c0b0", 00:24:38.652 "is_configured": true, 00:24:38.652 "data_offset": 2048, 00:24:38.652 "data_size": 63488 00:24:38.652 }, 00:24:38.652 { 00:24:38.652 "name": "BaseBdev2", 00:24:38.652 "uuid": "7074219f-22d2-5a19-b8bc-7f0b2033c8e1", 00:24:38.652 "is_configured": true, 00:24:38.652 "data_offset": 2048, 00:24:38.652 "data_size": 63488 00:24:38.652 }, 00:24:38.652 { 00:24:38.652 "name": "BaseBdev3", 00:24:38.652 "uuid": "06122435-bb74-5b93-960c-7eba70b4ec9d", 00:24:38.652 "is_configured": true, 00:24:38.652 "data_offset": 2048, 00:24:38.652 "data_size": 63488 00:24:38.652 }, 00:24:38.652 { 00:24:38.652 "name": "BaseBdev4", 00:24:38.652 "uuid": "4ca625d5-3a61-5c91-bbfc-51dba44692f1", 00:24:38.652 "is_configured": true, 00:24:38.652 "data_offset": 2048, 00:24:38.652 "data_size": 63488 00:24:38.652 } 00:24:38.652 ] 00:24:38.652 }' 00:24:38.652 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:38.652 13:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:39.222 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:24:39.222 13:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:39.222 [2024-11-20 13:44:42.082074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.157 13:44:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.157 13:44:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.157 13:44:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:40.157 "name": "raid_bdev1", 00:24:40.157 "uuid": "2b0a6234-ad82-4b40-bcbc-542b573e8d89", 00:24:40.157 "strip_size_kb": 64, 00:24:40.157 "state": "online", 00:24:40.157 "raid_level": "raid0", 00:24:40.157 "superblock": true, 00:24:40.157 "num_base_bdevs": 4, 00:24:40.157 "num_base_bdevs_discovered": 4, 00:24:40.157 "num_base_bdevs_operational": 4, 00:24:40.157 "base_bdevs_list": [ 00:24:40.157 { 00:24:40.157 "name": "BaseBdev1", 00:24:40.157 "uuid": "94bde3e3-21c2-56d8-b6c5-25ebca50c0b0", 00:24:40.157 "is_configured": true, 00:24:40.157 "data_offset": 2048, 00:24:40.157 "data_size": 63488 00:24:40.157 }, 00:24:40.157 { 00:24:40.157 "name": "BaseBdev2", 00:24:40.157 "uuid": "7074219f-22d2-5a19-b8bc-7f0b2033c8e1", 00:24:40.157 "is_configured": true, 00:24:40.157 "data_offset": 2048, 00:24:40.157 "data_size": 63488 00:24:40.157 }, 00:24:40.157 { 00:24:40.157 "name": "BaseBdev3", 00:24:40.157 "uuid": "06122435-bb74-5b93-960c-7eba70b4ec9d", 00:24:40.157 "is_configured": true, 00:24:40.157 "data_offset": 2048, 00:24:40.157 "data_size": 63488 00:24:40.157 }, 00:24:40.157 { 00:24:40.157 "name": "BaseBdev4", 00:24:40.157 "uuid": "4ca625d5-3a61-5c91-bbfc-51dba44692f1", 00:24:40.157 "is_configured": true, 00:24:40.157 "data_offset": 2048, 00:24:40.157 "data_size": 63488 00:24:40.157 } 00:24:40.157 ] 00:24:40.158 }' 00:24:40.158 13:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:40.158 13:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.723 13:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:40.723 13:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.723 13:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.723 [2024-11-20 13:44:43.455607] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:40.723 [2024-11-20 13:44:43.455814] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:40.723 [2024-11-20 13:44:43.459441] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:40.723 [2024-11-20 13:44:43.459525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:40.723 [2024-11-20 13:44:43.459598] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:40.723 [2024-11-20 13:44:43.459617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:24:40.723 { 00:24:40.723 "results": [ 00:24:40.723 { 00:24:40.723 "job": "raid_bdev1", 00:24:40.723 "core_mask": "0x1", 00:24:40.723 "workload": "randrw", 00:24:40.723 "percentage": 50, 00:24:40.723 "status": "finished", 00:24:40.723 "queue_depth": 1, 00:24:40.723 "io_size": 131072, 00:24:40.723 "runtime": 1.370992, 00:24:40.723 "iops": 9885.542731102734, 00:24:40.723 "mibps": 1235.6928413878418, 00:24:40.723 "io_failed": 1, 00:24:40.723 "io_timeout": 0, 00:24:40.723 "avg_latency_us": 141.17805733295774, 00:24:40.723 "min_latency_us": 39.56363636363636, 00:24:40.723 "max_latency_us": 1839.4763636363637 00:24:40.723 } 00:24:40.723 ], 00:24:40.723 "core_count": 1 00:24:40.723 } 00:24:40.723 13:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.723 13:44:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71281 00:24:40.723 13:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71281 ']' 00:24:40.723 13:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71281 00:24:40.723 13:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:24:40.723 13:44:43 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.723 13:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71281 00:24:40.723 killing process with pid 71281 00:24:40.723 13:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:40.723 13:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:40.723 13:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71281' 00:24:40.724 13:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71281 00:24:40.724 [2024-11-20 13:44:43.491967] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:40.724 13:44:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71281 00:24:40.980 [2024-11-20 13:44:43.803386] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:42.355 13:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.t3j9iNEUVv 00:24:42.355 13:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:24:42.355 13:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:24:42.355 13:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:24:42.355 13:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:24:42.355 13:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:42.355 13:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:42.355 13:44:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:24:42.355 00:24:42.355 real 0m5.036s 00:24:42.355 user 0m6.231s 00:24:42.355 sys 0m0.597s 00:24:42.355 13:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:24:42.355 ************************************ 00:24:42.355 END TEST raid_read_error_test 00:24:42.355 ************************************ 00:24:42.355 13:44:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.355 13:44:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:24:42.355 13:44:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:42.355 13:44:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:42.355 13:44:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:42.355 ************************************ 00:24:42.355 START TEST raid_write_error_test 00:24:42.355 ************************************ 00:24:42.355 13:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:24:42.355 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:24:42.355 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:24:42.355 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:24:42.355 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:24:42.355 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:42.355 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:24:42.355 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:42.355 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:42.355 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:24:42.355 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:42.355 13:44:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:42.355 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:24:42.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.63tJJJshfS 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71432 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71432 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71432 ']' 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.356 13:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:42.356 [2024-11-20 13:44:45.188541] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:24:42.356 [2024-11-20 13:44:45.188761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71432 ] 00:24:42.614 [2024-11-20 13:44:45.384602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.873 [2024-11-20 13:44:45.573321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.131 [2024-11-20 13:44:45.817235] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:43.131 [2024-11-20 13:44:45.817370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:43.390 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.390 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:24:43.390 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:43.390 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:43.390 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.390 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.390 BaseBdev1_malloc 00:24:43.390 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.390 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:24:43.390 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.390 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.390 true 00:24:43.390 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:24:43.390 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:43.390 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.390 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.390 [2024-11-20 13:44:46.277911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:43.390 [2024-11-20 13:44:46.278158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:43.390 [2024-11-20 13:44:46.278210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:43.390 [2024-11-20 13:44:46.278234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:43.390 [2024-11-20 13:44:46.281162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:43.390 [2024-11-20 13:44:46.281354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:43.390 BaseBdev1 00:24:43.390 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.390 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:43.390 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:43.390 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.390 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.649 BaseBdev2_malloc 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:24:43.649 13:44:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.649 true 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.649 [2024-11-20 13:44:46.342666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:43.649 [2024-11-20 13:44:46.342752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:43.649 [2024-11-20 13:44:46.342785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:43.649 [2024-11-20 13:44:46.342806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:43.649 [2024-11-20 13:44:46.345842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:43.649 [2024-11-20 13:44:46.345931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:43.649 BaseBdev2 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:24:43.649 BaseBdev3_malloc 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.649 true 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.649 [2024-11-20 13:44:46.410980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:43.649 [2024-11-20 13:44:46.411082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:43.649 [2024-11-20 13:44:46.411116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:43.649 [2024-11-20 13:44:46.411138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:43.649 [2024-11-20 13:44:46.414044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:43.649 [2024-11-20 13:44:46.414102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:43.649 BaseBdev3 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.649 BaseBdev4_malloc 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.649 true 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.649 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.649 [2024-11-20 13:44:46.472466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:43.649 [2024-11-20 13:44:46.472695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:43.650 [2024-11-20 13:44:46.472741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:43.650 [2024-11-20 13:44:46.472766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:43.650 [2024-11-20 13:44:46.475860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:43.650 [2024-11-20 13:44:46.476090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:43.650 BaseBdev4 
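[editor's note] The four base bdevs set up above are each built from the same three-step RPC chain: a malloc backing bdev, an error-injection wrapper, and a passthru bdev that claims it. A minimal sketch of that chain as a command list, assuming SPDK's `scripts/rpc.py` CLI style; the sizes (32 MiB malloc, 512-byte blocks) and the `BaseBdevN` / `EE_` naming are taken from the log, while the `rpc.py` invocation form and the helper name are illustrative:

```python
# Sketch of the per-base-bdev setup chain driven by rpc_cmd in this test.
# Sizes and names come from the log above; "rpc.py" usage is an assumption.
def base_bdev_setup_cmds(num_base_bdevs=4):
    cmds = []
    for i in range(1, num_base_bdevs + 1):
        name = f"BaseBdev{i}"
        # bdev_malloc_create <size_mb> <block_size> -b <name>
        cmds.append(f"rpc.py bdev_malloc_create 32 512 -b {name}_malloc")
        # Wrap the malloc bdev so I/O errors can be injected later.
        cmds.append(f"rpc.py bdev_error_create {name}_malloc")
        # Claim the error bdev behind a passthru named BaseBdevN.
        cmds.append(f"rpc.py bdev_passthru_create -b EE_{name}_malloc -p {name}")
    return cmds

for cmd in base_bdev_setup_cmds():
    print(cmd)
```

The write-error injection later in the run (`bdev_error_inject_error EE_BaseBdev1_malloc write failure`) targets the middle layer of exactly this stack.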
00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.650 [2024-11-20 13:44:46.480559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:43.650 [2024-11-20 13:44:46.483282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:43.650 [2024-11-20 13:44:46.483560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:43.650 [2024-11-20 13:44:46.483690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:43.650 [2024-11-20 13:44:46.484053] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:24:43.650 [2024-11-20 13:44:46.484087] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:43.650 [2024-11-20 13:44:46.484463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:24:43.650 [2024-11-20 13:44:46.484753] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:24:43.650 [2024-11-20 13:44:46.484775] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:24:43.650 [2024-11-20 13:44:46.485072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:43.650 "name": "raid_bdev1", 00:24:43.650 "uuid": "453c23ad-1345-4cc9-b46b-05a858c9c91c", 00:24:43.650 "strip_size_kb": 64, 00:24:43.650 "state": "online", 00:24:43.650 "raid_level": "raid0", 00:24:43.650 "superblock": true, 00:24:43.650 "num_base_bdevs": 4, 00:24:43.650 "num_base_bdevs_discovered": 4, 00:24:43.650 
"num_base_bdevs_operational": 4, 00:24:43.650 "base_bdevs_list": [ 00:24:43.650 { 00:24:43.650 "name": "BaseBdev1", 00:24:43.650 "uuid": "02c2284b-6959-5184-878c-7422a90853a3", 00:24:43.650 "is_configured": true, 00:24:43.650 "data_offset": 2048, 00:24:43.650 "data_size": 63488 00:24:43.650 }, 00:24:43.650 { 00:24:43.650 "name": "BaseBdev2", 00:24:43.650 "uuid": "9edd70a9-8b52-5739-83c3-6473779e615d", 00:24:43.650 "is_configured": true, 00:24:43.650 "data_offset": 2048, 00:24:43.650 "data_size": 63488 00:24:43.650 }, 00:24:43.650 { 00:24:43.650 "name": "BaseBdev3", 00:24:43.650 "uuid": "98f8eee8-63f9-5502-9395-b5669719d13f", 00:24:43.650 "is_configured": true, 00:24:43.650 "data_offset": 2048, 00:24:43.650 "data_size": 63488 00:24:43.650 }, 00:24:43.650 { 00:24:43.650 "name": "BaseBdev4", 00:24:43.650 "uuid": "47907f42-1c9d-5491-8973-d9b4c9853942", 00:24:43.650 "is_configured": true, 00:24:43.650 "data_offset": 2048, 00:24:43.650 "data_size": 63488 00:24:43.650 } 00:24:43.650 ] 00:24:43.650 }' 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:43.650 13:44:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:44.217 13:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:24:44.217 13:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:24:44.495 [2024-11-20 13:44:47.158813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:24:45.430 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:45.431 "name": "raid_bdev1", 00:24:45.431 "uuid": "453c23ad-1345-4cc9-b46b-05a858c9c91c", 00:24:45.431 "strip_size_kb": 64, 00:24:45.431 "state": "online", 00:24:45.431 "raid_level": "raid0", 00:24:45.431 "superblock": true, 00:24:45.431 "num_base_bdevs": 4, 00:24:45.431 "num_base_bdevs_discovered": 4, 00:24:45.431 "num_base_bdevs_operational": 4, 00:24:45.431 "base_bdevs_list": [ 00:24:45.431 { 00:24:45.431 "name": "BaseBdev1", 00:24:45.431 "uuid": "02c2284b-6959-5184-878c-7422a90853a3", 00:24:45.431 "is_configured": true, 00:24:45.431 "data_offset": 2048, 00:24:45.431 "data_size": 63488 00:24:45.431 }, 00:24:45.431 { 00:24:45.431 "name": "BaseBdev2", 00:24:45.431 "uuid": "9edd70a9-8b52-5739-83c3-6473779e615d", 00:24:45.431 "is_configured": true, 00:24:45.431 "data_offset": 2048, 00:24:45.431 "data_size": 63488 00:24:45.431 }, 00:24:45.431 { 00:24:45.431 "name": "BaseBdev3", 00:24:45.431 "uuid": "98f8eee8-63f9-5502-9395-b5669719d13f", 00:24:45.431 "is_configured": true, 00:24:45.431 "data_offset": 2048, 00:24:45.431 "data_size": 63488 00:24:45.431 }, 00:24:45.431 { 00:24:45.431 "name": "BaseBdev4", 00:24:45.431 "uuid": "47907f42-1c9d-5491-8973-d9b4c9853942", 00:24:45.431 "is_configured": true, 00:24:45.431 "data_offset": 2048, 00:24:45.431 "data_size": 63488 00:24:45.431 } 00:24:45.431 ] 00:24:45.431 }' 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:45.431 13:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.690 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:45.690 13:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.690 13:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:24:45.690 [2024-11-20 13:44:48.586037] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:45.690 [2024-11-20 13:44:48.586236] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:45.690 [2024-11-20 13:44:48.589722] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:45.690 [2024-11-20 13:44:48.589803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:45.690 [2024-11-20 13:44:48.589869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:45.690 [2024-11-20 13:44:48.589907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:24:45.690 { 00:24:45.690 "results": [ 00:24:45.690 { 00:24:45.690 "job": "raid_bdev1", 00:24:45.690 "core_mask": "0x1", 00:24:45.690 "workload": "randrw", 00:24:45.690 "percentage": 50, 00:24:45.690 "status": "finished", 00:24:45.690 "queue_depth": 1, 00:24:45.690 "io_size": 131072, 00:24:45.690 "runtime": 1.424395, 00:24:45.690 "iops": 9350.636586059345, 00:24:45.690 "mibps": 1168.8295732574181, 00:24:45.690 "io_failed": 1, 00:24:45.690 "io_timeout": 0, 00:24:45.690 "avg_latency_us": 149.05239202839203, 00:24:45.690 "min_latency_us": 41.89090909090909, 00:24:45.690 "max_latency_us": 1921.3963636363637 00:24:45.690 } 00:24:45.690 ], 00:24:45.690 "core_count": 1 00:24:45.690 } 00:24:45.690 13:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.690 13:44:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71432 00:24:45.690 13:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71432 ']' 00:24:45.690 13:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71432 00:24:45.690 13:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
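[editor's note] The `fail_per_s=0.70` that the test later greps out of the bdevperf log is simply `io_failed / runtime` from the results block dumped above. A small sketch recomputing it from those same numbers (results JSON trimmed to the fields used here):

```python
import json

# Results block copied from the bdevperf dump above, trimmed to the
# fields this check needs.
results = json.loads("""
{
  "results": [
    {
      "job": "raid_bdev1",
      "runtime": 1.424395,
      "iops": 9350.636586059345,
      "io_failed": 1
    }
  ]
}
""")

job = results["results"][0]
fail_per_s = job["io_failed"] / job["runtime"]
# The test asserts this is nonzero for raid0 (no redundancy), i.e. != 0.00.
print(f"{fail_per_s:.2f}")
```

With one failed I/O over a 1.42 s run this prints `0.70`, matching the value the `grep raid_bdev1 | awk '{print $6}'` pipeline extracts at the end of the test.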
00:24:45.690 13:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:45.690 13:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71432 00:24:45.948 killing process with pid 71432 00:24:45.948 13:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:45.948 13:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:45.948 13:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71432' 00:24:45.948 13:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71432 00:24:45.948 13:44:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71432 00:24:45.948 [2024-11-20 13:44:48.623101] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:46.207 [2024-11-20 13:44:48.930756] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:47.579 13:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.63tJJJshfS 00:24:47.579 13:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:24:47.579 13:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:24:47.579 13:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:24:47.579 13:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:24:47.579 13:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:47.579 13:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:47.579 13:44:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:24:47.579 00:24:47.579 real 0m5.033s 00:24:47.579 user 0m6.198s 00:24:47.579 sys 0m0.663s 00:24:47.579 13:44:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:47.579 ************************************ 00:24:47.579 END TEST raid_write_error_test 00:24:47.579 ************************************ 00:24:47.579 13:44:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.579 13:44:50 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:24:47.579 13:44:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:24:47.579 13:44:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:47.579 13:44:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:47.579 13:44:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:47.579 ************************************ 00:24:47.579 START TEST raid_state_function_test 00:24:47.579 ************************************ 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71580 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:47.579 Process raid pid: 71580 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71580' 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71580 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71580 ']' 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.579 13:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.579 [2024-11-20 13:44:50.284464] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
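[editor's note] The `verify_raid_bdev_state` helper exercised throughout this log boils down to `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == ...)'` followed by per-field comparisons. A rough Python equivalent of that check, with field names taken from the JSON dumps in this log; the function itself is illustrative, not SPDK code:

```python
import json

def verify_raid_bdev_state(bdevs_json, name, state, raid_level,
                           strip_size, operational):
    # Mirror of the jq select: pick the entry whose "name" matches.
    info = next(b for b in json.loads(bdevs_json) if b["name"] == name)
    return (info["state"] == state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == operational)

# Shape matches the "configuring" dump for Existed_Raid in this log.
dump = """[{"name": "Existed_Raid", "state": "configuring",
            "raid_level": "concat", "strip_size_kb": 64,
            "num_base_bdevs_discovered": 0,
            "num_base_bdevs_operational": 4}]"""
print(verify_raid_bdev_state(dump, "Existed_Raid", "configuring",
                             "concat", 64, 4))
```

The same comparison runs again after each base bdev is added, with the expected state flipping from `configuring` to `online` once all four are discovered.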
00:24:47.579 [2024-11-20 13:44:50.284984] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.579 [2024-11-20 13:44:50.467421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.838 [2024-11-20 13:44:50.602295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.097 [2024-11-20 13:44:50.817260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:48.097 [2024-11-20 13:44:50.817315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:48.664 [2024-11-20 13:44:51.349380] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:48.664 [2024-11-20 13:44:51.349490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:48.664 [2024-11-20 13:44:51.349508] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:48.664 [2024-11-20 13:44:51.349525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:48.664 [2024-11-20 13:44:51.349534] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:24:48.664 [2024-11-20 13:44:51.349548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:48.664 [2024-11-20 13:44:51.349558] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:48.664 [2024-11-20 13:44:51.349571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:48.664 "name": "Existed_Raid", 00:24:48.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.664 "strip_size_kb": 64, 00:24:48.664 "state": "configuring", 00:24:48.664 "raid_level": "concat", 00:24:48.664 "superblock": false, 00:24:48.664 "num_base_bdevs": 4, 00:24:48.664 "num_base_bdevs_discovered": 0, 00:24:48.664 "num_base_bdevs_operational": 4, 00:24:48.664 "base_bdevs_list": [ 00:24:48.664 { 00:24:48.664 "name": "BaseBdev1", 00:24:48.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.664 "is_configured": false, 00:24:48.664 "data_offset": 0, 00:24:48.664 "data_size": 0 00:24:48.664 }, 00:24:48.664 { 00:24:48.664 "name": "BaseBdev2", 00:24:48.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.664 "is_configured": false, 00:24:48.664 "data_offset": 0, 00:24:48.664 "data_size": 0 00:24:48.664 }, 00:24:48.664 { 00:24:48.664 "name": "BaseBdev3", 00:24:48.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.664 "is_configured": false, 00:24:48.664 "data_offset": 0, 00:24:48.664 "data_size": 0 00:24:48.664 }, 00:24:48.664 { 00:24:48.664 "name": "BaseBdev4", 00:24:48.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.664 "is_configured": false, 00:24:48.664 "data_offset": 0, 00:24:48.664 "data_size": 0 00:24:48.664 } 00:24:48.664 ] 00:24:48.664 }' 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:48.664 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.232 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:24:49.232 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.232 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.232 [2024-11-20 13:44:51.865458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:49.232 [2024-11-20 13:44:51.865507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:49.232 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.232 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:49.232 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.232 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.232 [2024-11-20 13:44:51.873435] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:49.232 [2024-11-20 13:44:51.873491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:49.232 [2024-11-20 13:44:51.873507] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:49.232 [2024-11-20 13:44:51.873523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:49.232 [2024-11-20 13:44:51.873532] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:49.232 [2024-11-20 13:44:51.873555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:49.232 [2024-11-20 13:44:51.873565] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:49.232 [2024-11-20 13:44:51.873579] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:49.232 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.232 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:49.232 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.232 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.232 [2024-11-20 13:44:51.919232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:49.232 BaseBdev1 00:24:49.232 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.232 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:49.232 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:24:49.232 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:49.232 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:49.232 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:49.232 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.233 [ 00:24:49.233 { 00:24:49.233 "name": "BaseBdev1", 00:24:49.233 "aliases": [ 00:24:49.233 "13ad1ef5-b422-4fc1-bba1-cd2346b97dfd" 00:24:49.233 ], 00:24:49.233 "product_name": "Malloc disk", 00:24:49.233 "block_size": 512, 00:24:49.233 "num_blocks": 65536, 00:24:49.233 "uuid": "13ad1ef5-b422-4fc1-bba1-cd2346b97dfd", 00:24:49.233 "assigned_rate_limits": { 00:24:49.233 "rw_ios_per_sec": 0, 00:24:49.233 "rw_mbytes_per_sec": 0, 00:24:49.233 "r_mbytes_per_sec": 0, 00:24:49.233 "w_mbytes_per_sec": 0 00:24:49.233 }, 00:24:49.233 "claimed": true, 00:24:49.233 "claim_type": "exclusive_write", 00:24:49.233 "zoned": false, 00:24:49.233 "supported_io_types": { 00:24:49.233 "read": true, 00:24:49.233 "write": true, 00:24:49.233 "unmap": true, 00:24:49.233 "flush": true, 00:24:49.233 "reset": true, 00:24:49.233 "nvme_admin": false, 00:24:49.233 "nvme_io": false, 00:24:49.233 "nvme_io_md": false, 00:24:49.233 "write_zeroes": true, 00:24:49.233 "zcopy": true, 00:24:49.233 "get_zone_info": false, 00:24:49.233 "zone_management": false, 00:24:49.233 "zone_append": false, 00:24:49.233 "compare": false, 00:24:49.233 "compare_and_write": false, 00:24:49.233 "abort": true, 00:24:49.233 "seek_hole": false, 00:24:49.233 "seek_data": false, 00:24:49.233 "copy": true, 00:24:49.233 "nvme_iov_md": false 00:24:49.233 }, 00:24:49.233 "memory_domains": [ 00:24:49.233 { 00:24:49.233 "dma_device_id": "system", 00:24:49.233 "dma_device_type": 1 00:24:49.233 }, 00:24:49.233 { 00:24:49.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:49.233 "dma_device_type": 2 00:24:49.233 } 00:24:49.233 ], 00:24:49.233 "driver_specific": {} 00:24:49.233 } 00:24:49.233 ] 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:49.233 13:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.233 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:49.233 "name": "Existed_Raid", 
00:24:49.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.233 "strip_size_kb": 64, 00:24:49.233 "state": "configuring", 00:24:49.233 "raid_level": "concat", 00:24:49.233 "superblock": false, 00:24:49.233 "num_base_bdevs": 4, 00:24:49.233 "num_base_bdevs_discovered": 1, 00:24:49.233 "num_base_bdevs_operational": 4, 00:24:49.233 "base_bdevs_list": [ 00:24:49.233 { 00:24:49.233 "name": "BaseBdev1", 00:24:49.233 "uuid": "13ad1ef5-b422-4fc1-bba1-cd2346b97dfd", 00:24:49.233 "is_configured": true, 00:24:49.233 "data_offset": 0, 00:24:49.233 "data_size": 65536 00:24:49.233 }, 00:24:49.233 { 00:24:49.233 "name": "BaseBdev2", 00:24:49.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.233 "is_configured": false, 00:24:49.233 "data_offset": 0, 00:24:49.233 "data_size": 0 00:24:49.233 }, 00:24:49.233 { 00:24:49.233 "name": "BaseBdev3", 00:24:49.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.233 "is_configured": false, 00:24:49.233 "data_offset": 0, 00:24:49.233 "data_size": 0 00:24:49.233 }, 00:24:49.233 { 00:24:49.233 "name": "BaseBdev4", 00:24:49.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.233 "is_configured": false, 00:24:49.233 "data_offset": 0, 00:24:49.233 "data_size": 0 00:24:49.233 } 00:24:49.233 ] 00:24:49.233 }' 00:24:49.233 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:49.233 13:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.802 [2024-11-20 13:44:52.491426] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:49.802 [2024-11-20 13:44:52.491646] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.802 [2024-11-20 13:44:52.499489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:49.802 [2024-11-20 13:44:52.501989] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:49.802 [2024-11-20 13:44:52.502043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:49.802 [2024-11-20 13:44:52.502060] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:49.802 [2024-11-20 13:44:52.502077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:49.802 [2024-11-20 13:44:52.502087] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:49.802 [2024-11-20 13:44:52.502101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:49.802 "name": "Existed_Raid", 00:24:49.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.802 "strip_size_kb": 64, 00:24:49.802 "state": "configuring", 00:24:49.802 "raid_level": "concat", 00:24:49.802 "superblock": false, 00:24:49.802 "num_base_bdevs": 4, 00:24:49.802 
"num_base_bdevs_discovered": 1, 00:24:49.802 "num_base_bdevs_operational": 4, 00:24:49.802 "base_bdevs_list": [ 00:24:49.802 { 00:24:49.802 "name": "BaseBdev1", 00:24:49.802 "uuid": "13ad1ef5-b422-4fc1-bba1-cd2346b97dfd", 00:24:49.802 "is_configured": true, 00:24:49.802 "data_offset": 0, 00:24:49.802 "data_size": 65536 00:24:49.802 }, 00:24:49.802 { 00:24:49.802 "name": "BaseBdev2", 00:24:49.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.802 "is_configured": false, 00:24:49.802 "data_offset": 0, 00:24:49.802 "data_size": 0 00:24:49.802 }, 00:24:49.802 { 00:24:49.802 "name": "BaseBdev3", 00:24:49.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.802 "is_configured": false, 00:24:49.802 "data_offset": 0, 00:24:49.802 "data_size": 0 00:24:49.802 }, 00:24:49.802 { 00:24:49.802 "name": "BaseBdev4", 00:24:49.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.802 "is_configured": false, 00:24:49.802 "data_offset": 0, 00:24:49.802 "data_size": 0 00:24:49.802 } 00:24:49.802 ] 00:24:49.802 }' 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:49.802 13:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.372 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:50.372 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.372 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.372 [2024-11-20 13:44:53.052336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:50.372 BaseBdev2 00:24:50.372 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.372 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:50.372 13:44:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:50.372 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:50.372 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:50.372 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.373 [ 00:24:50.373 { 00:24:50.373 "name": "BaseBdev2", 00:24:50.373 "aliases": [ 00:24:50.373 "4f9b38e0-43da-4f17-b313-01c3dd3ab0fe" 00:24:50.373 ], 00:24:50.373 "product_name": "Malloc disk", 00:24:50.373 "block_size": 512, 00:24:50.373 "num_blocks": 65536, 00:24:50.373 "uuid": "4f9b38e0-43da-4f17-b313-01c3dd3ab0fe", 00:24:50.373 "assigned_rate_limits": { 00:24:50.373 "rw_ios_per_sec": 0, 00:24:50.373 "rw_mbytes_per_sec": 0, 00:24:50.373 "r_mbytes_per_sec": 0, 00:24:50.373 "w_mbytes_per_sec": 0 00:24:50.373 }, 00:24:50.373 "claimed": true, 00:24:50.373 "claim_type": "exclusive_write", 00:24:50.373 "zoned": false, 00:24:50.373 "supported_io_types": { 
00:24:50.373 "read": true, 00:24:50.373 "write": true, 00:24:50.373 "unmap": true, 00:24:50.373 "flush": true, 00:24:50.373 "reset": true, 00:24:50.373 "nvme_admin": false, 00:24:50.373 "nvme_io": false, 00:24:50.373 "nvme_io_md": false, 00:24:50.373 "write_zeroes": true, 00:24:50.373 "zcopy": true, 00:24:50.373 "get_zone_info": false, 00:24:50.373 "zone_management": false, 00:24:50.373 "zone_append": false, 00:24:50.373 "compare": false, 00:24:50.373 "compare_and_write": false, 00:24:50.373 "abort": true, 00:24:50.373 "seek_hole": false, 00:24:50.373 "seek_data": false, 00:24:50.373 "copy": true, 00:24:50.373 "nvme_iov_md": false 00:24:50.373 }, 00:24:50.373 "memory_domains": [ 00:24:50.373 { 00:24:50.373 "dma_device_id": "system", 00:24:50.373 "dma_device_type": 1 00:24:50.373 }, 00:24:50.373 { 00:24:50.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:50.373 "dma_device_type": 2 00:24:50.373 } 00:24:50.373 ], 00:24:50.373 "driver_specific": {} 00:24:50.373 } 00:24:50.373 ] 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:50.373 "name": "Existed_Raid", 00:24:50.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.373 "strip_size_kb": 64, 00:24:50.373 "state": "configuring", 00:24:50.373 "raid_level": "concat", 00:24:50.373 "superblock": false, 00:24:50.373 "num_base_bdevs": 4, 00:24:50.373 "num_base_bdevs_discovered": 2, 00:24:50.373 "num_base_bdevs_operational": 4, 00:24:50.373 "base_bdevs_list": [ 00:24:50.373 { 00:24:50.373 "name": "BaseBdev1", 00:24:50.373 "uuid": "13ad1ef5-b422-4fc1-bba1-cd2346b97dfd", 00:24:50.373 "is_configured": true, 00:24:50.373 "data_offset": 0, 00:24:50.373 "data_size": 65536 00:24:50.373 }, 00:24:50.373 { 00:24:50.373 "name": "BaseBdev2", 00:24:50.373 "uuid": "4f9b38e0-43da-4f17-b313-01c3dd3ab0fe", 00:24:50.373 
"is_configured": true, 00:24:50.373 "data_offset": 0, 00:24:50.373 "data_size": 65536 00:24:50.373 }, 00:24:50.373 { 00:24:50.373 "name": "BaseBdev3", 00:24:50.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.373 "is_configured": false, 00:24:50.373 "data_offset": 0, 00:24:50.373 "data_size": 0 00:24:50.373 }, 00:24:50.373 { 00:24:50.373 "name": "BaseBdev4", 00:24:50.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.373 "is_configured": false, 00:24:50.373 "data_offset": 0, 00:24:50.373 "data_size": 0 00:24:50.373 } 00:24:50.373 ] 00:24:50.373 }' 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:50.373 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.941 [2024-11-20 13:44:53.603907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:50.941 BaseBdev3 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.941 [ 00:24:50.941 { 00:24:50.941 "name": "BaseBdev3", 00:24:50.941 "aliases": [ 00:24:50.941 "e6200769-c344-4dbd-a8ed-c9fc6d41866c" 00:24:50.941 ], 00:24:50.941 "product_name": "Malloc disk", 00:24:50.941 "block_size": 512, 00:24:50.941 "num_blocks": 65536, 00:24:50.941 "uuid": "e6200769-c344-4dbd-a8ed-c9fc6d41866c", 00:24:50.941 "assigned_rate_limits": { 00:24:50.941 "rw_ios_per_sec": 0, 00:24:50.941 "rw_mbytes_per_sec": 0, 00:24:50.941 "r_mbytes_per_sec": 0, 00:24:50.941 "w_mbytes_per_sec": 0 00:24:50.941 }, 00:24:50.941 "claimed": true, 00:24:50.941 "claim_type": "exclusive_write", 00:24:50.941 "zoned": false, 00:24:50.941 "supported_io_types": { 00:24:50.941 "read": true, 00:24:50.941 "write": true, 00:24:50.941 "unmap": true, 00:24:50.941 "flush": true, 00:24:50.941 "reset": true, 00:24:50.941 "nvme_admin": false, 00:24:50.941 "nvme_io": false, 00:24:50.941 "nvme_io_md": false, 00:24:50.941 "write_zeroes": true, 00:24:50.941 "zcopy": true, 00:24:50.941 "get_zone_info": false, 00:24:50.941 "zone_management": false, 00:24:50.941 "zone_append": false, 00:24:50.941 "compare": false, 00:24:50.941 "compare_and_write": false, 
00:24:50.941 "abort": true, 00:24:50.941 "seek_hole": false, 00:24:50.941 "seek_data": false, 00:24:50.941 "copy": true, 00:24:50.941 "nvme_iov_md": false 00:24:50.941 }, 00:24:50.941 "memory_domains": [ 00:24:50.941 { 00:24:50.941 "dma_device_id": "system", 00:24:50.941 "dma_device_type": 1 00:24:50.941 }, 00:24:50.941 { 00:24:50.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:50.941 "dma_device_type": 2 00:24:50.941 } 00:24:50.941 ], 00:24:50.941 "driver_specific": {} 00:24:50.941 } 00:24:50.941 ] 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:50.941 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:50.942 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.942 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:50.942 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.942 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.942 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:50.942 "name": "Existed_Raid", 00:24:50.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.942 "strip_size_kb": 64, 00:24:50.942 "state": "configuring", 00:24:50.942 "raid_level": "concat", 00:24:50.942 "superblock": false, 00:24:50.942 "num_base_bdevs": 4, 00:24:50.942 "num_base_bdevs_discovered": 3, 00:24:50.942 "num_base_bdevs_operational": 4, 00:24:50.942 "base_bdevs_list": [ 00:24:50.942 { 00:24:50.942 "name": "BaseBdev1", 00:24:50.942 "uuid": "13ad1ef5-b422-4fc1-bba1-cd2346b97dfd", 00:24:50.942 "is_configured": true, 00:24:50.942 "data_offset": 0, 00:24:50.942 "data_size": 65536 00:24:50.942 }, 00:24:50.942 { 00:24:50.942 "name": "BaseBdev2", 00:24:50.942 "uuid": "4f9b38e0-43da-4f17-b313-01c3dd3ab0fe", 00:24:50.942 "is_configured": true, 00:24:50.942 "data_offset": 0, 00:24:50.942 "data_size": 65536 00:24:50.942 }, 00:24:50.942 { 00:24:50.942 "name": "BaseBdev3", 00:24:50.942 "uuid": "e6200769-c344-4dbd-a8ed-c9fc6d41866c", 00:24:50.942 "is_configured": true, 00:24:50.942 "data_offset": 0, 00:24:50.942 "data_size": 65536 00:24:50.942 }, 00:24:50.942 { 00:24:50.942 "name": "BaseBdev4", 00:24:50.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.942 "is_configured": false, 
00:24:50.942 "data_offset": 0, 00:24:50.942 "data_size": 0 00:24:50.942 } 00:24:50.942 ] 00:24:50.942 }' 00:24:50.942 13:44:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:50.942 13:44:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:51.511 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:24:51.511 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:51.512 [2024-11-20 13:44:54.199840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:51.512 [2024-11-20 13:44:54.200305] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:51.512 [2024-11-20 13:44:54.200341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:24:51.512 [2024-11-20 13:44:54.200735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:51.512 [2024-11-20 13:44:54.200986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:51.512 [2024-11-20 13:44:54.201008] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:24:51.512 [2024-11-20 13:44:54.201343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:51.512 BaseBdev4 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:51.512 [ 00:24:51.512 { 00:24:51.512 "name": "BaseBdev4", 00:24:51.512 "aliases": [ 00:24:51.512 "01f081b8-e676-45df-8f0a-1eea8370d941" 00:24:51.512 ], 00:24:51.512 "product_name": "Malloc disk", 00:24:51.512 "block_size": 512, 00:24:51.512 "num_blocks": 65536, 00:24:51.512 "uuid": "01f081b8-e676-45df-8f0a-1eea8370d941", 00:24:51.512 "assigned_rate_limits": { 00:24:51.512 "rw_ios_per_sec": 0, 00:24:51.512 "rw_mbytes_per_sec": 0, 00:24:51.512 "r_mbytes_per_sec": 0, 00:24:51.512 "w_mbytes_per_sec": 0 00:24:51.512 }, 00:24:51.512 "claimed": true, 00:24:51.512 "claim_type": "exclusive_write", 00:24:51.512 "zoned": false, 00:24:51.512 "supported_io_types": { 00:24:51.512 "read": true, 00:24:51.512 "write": true, 00:24:51.512 "unmap": true, 00:24:51.512 "flush": true, 00:24:51.512 "reset": true, 00:24:51.512 
"nvme_admin": false, 00:24:51.512 "nvme_io": false, 00:24:51.512 "nvme_io_md": false, 00:24:51.512 "write_zeroes": true, 00:24:51.512 "zcopy": true, 00:24:51.512 "get_zone_info": false, 00:24:51.512 "zone_management": false, 00:24:51.512 "zone_append": false, 00:24:51.512 "compare": false, 00:24:51.512 "compare_and_write": false, 00:24:51.512 "abort": true, 00:24:51.512 "seek_hole": false, 00:24:51.512 "seek_data": false, 00:24:51.512 "copy": true, 00:24:51.512 "nvme_iov_md": false 00:24:51.512 }, 00:24:51.512 "memory_domains": [ 00:24:51.512 { 00:24:51.512 "dma_device_id": "system", 00:24:51.512 "dma_device_type": 1 00:24:51.512 }, 00:24:51.512 { 00:24:51.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:51.512 "dma_device_type": 2 00:24:51.512 } 00:24:51.512 ], 00:24:51.512 "driver_specific": {} 00:24:51.512 } 00:24:51.512 ] 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:51.512 
13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:51.512 "name": "Existed_Raid", 00:24:51.512 "uuid": "c845ea85-2844-4002-8ab4-ff0e1da18e08", 00:24:51.512 "strip_size_kb": 64, 00:24:51.512 "state": "online", 00:24:51.512 "raid_level": "concat", 00:24:51.512 "superblock": false, 00:24:51.512 "num_base_bdevs": 4, 00:24:51.512 "num_base_bdevs_discovered": 4, 00:24:51.512 "num_base_bdevs_operational": 4, 00:24:51.512 "base_bdevs_list": [ 00:24:51.512 { 00:24:51.512 "name": "BaseBdev1", 00:24:51.512 "uuid": "13ad1ef5-b422-4fc1-bba1-cd2346b97dfd", 00:24:51.512 "is_configured": true, 00:24:51.512 "data_offset": 0, 00:24:51.512 "data_size": 65536 00:24:51.512 }, 00:24:51.512 { 00:24:51.512 "name": "BaseBdev2", 00:24:51.512 "uuid": "4f9b38e0-43da-4f17-b313-01c3dd3ab0fe", 00:24:51.512 "is_configured": true, 00:24:51.512 "data_offset": 0, 00:24:51.512 "data_size": 65536 00:24:51.512 }, 00:24:51.512 { 00:24:51.512 "name": "BaseBdev3", 
00:24:51.512 "uuid": "e6200769-c344-4dbd-a8ed-c9fc6d41866c", 00:24:51.512 "is_configured": true, 00:24:51.512 "data_offset": 0, 00:24:51.512 "data_size": 65536 00:24:51.512 }, 00:24:51.512 { 00:24:51.512 "name": "BaseBdev4", 00:24:51.512 "uuid": "01f081b8-e676-45df-8f0a-1eea8370d941", 00:24:51.512 "is_configured": true, 00:24:51.512 "data_offset": 0, 00:24:51.512 "data_size": 65536 00:24:51.512 } 00:24:51.512 ] 00:24:51.512 }' 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:51.512 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:52.080 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:52.080 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:52.080 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:52.080 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:52.080 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:52.080 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:52.080 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:52.080 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:52.080 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.080 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:52.080 [2024-11-20 13:44:54.772609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:52.080 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.080 
13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:52.080 "name": "Existed_Raid", 00:24:52.080 "aliases": [ 00:24:52.080 "c845ea85-2844-4002-8ab4-ff0e1da18e08" 00:24:52.080 ], 00:24:52.080 "product_name": "Raid Volume", 00:24:52.080 "block_size": 512, 00:24:52.080 "num_blocks": 262144, 00:24:52.080 "uuid": "c845ea85-2844-4002-8ab4-ff0e1da18e08", 00:24:52.080 "assigned_rate_limits": { 00:24:52.080 "rw_ios_per_sec": 0, 00:24:52.080 "rw_mbytes_per_sec": 0, 00:24:52.080 "r_mbytes_per_sec": 0, 00:24:52.080 "w_mbytes_per_sec": 0 00:24:52.080 }, 00:24:52.080 "claimed": false, 00:24:52.080 "zoned": false, 00:24:52.080 "supported_io_types": { 00:24:52.080 "read": true, 00:24:52.080 "write": true, 00:24:52.080 "unmap": true, 00:24:52.080 "flush": true, 00:24:52.080 "reset": true, 00:24:52.080 "nvme_admin": false, 00:24:52.080 "nvme_io": false, 00:24:52.080 "nvme_io_md": false, 00:24:52.080 "write_zeroes": true, 00:24:52.080 "zcopy": false, 00:24:52.080 "get_zone_info": false, 00:24:52.080 "zone_management": false, 00:24:52.080 "zone_append": false, 00:24:52.080 "compare": false, 00:24:52.080 "compare_and_write": false, 00:24:52.080 "abort": false, 00:24:52.080 "seek_hole": false, 00:24:52.080 "seek_data": false, 00:24:52.080 "copy": false, 00:24:52.080 "nvme_iov_md": false 00:24:52.080 }, 00:24:52.080 "memory_domains": [ 00:24:52.080 { 00:24:52.080 "dma_device_id": "system", 00:24:52.080 "dma_device_type": 1 00:24:52.080 }, 00:24:52.080 { 00:24:52.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:52.080 "dma_device_type": 2 00:24:52.080 }, 00:24:52.080 { 00:24:52.080 "dma_device_id": "system", 00:24:52.080 "dma_device_type": 1 00:24:52.080 }, 00:24:52.080 { 00:24:52.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:52.080 "dma_device_type": 2 00:24:52.080 }, 00:24:52.080 { 00:24:52.081 "dma_device_id": "system", 00:24:52.081 "dma_device_type": 1 00:24:52.081 }, 00:24:52.081 { 00:24:52.081 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:24:52.081 "dma_device_type": 2 00:24:52.081 }, 00:24:52.081 { 00:24:52.081 "dma_device_id": "system", 00:24:52.081 "dma_device_type": 1 00:24:52.081 }, 00:24:52.081 { 00:24:52.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:52.081 "dma_device_type": 2 00:24:52.081 } 00:24:52.081 ], 00:24:52.081 "driver_specific": { 00:24:52.081 "raid": { 00:24:52.081 "uuid": "c845ea85-2844-4002-8ab4-ff0e1da18e08", 00:24:52.081 "strip_size_kb": 64, 00:24:52.081 "state": "online", 00:24:52.081 "raid_level": "concat", 00:24:52.081 "superblock": false, 00:24:52.081 "num_base_bdevs": 4, 00:24:52.081 "num_base_bdevs_discovered": 4, 00:24:52.081 "num_base_bdevs_operational": 4, 00:24:52.081 "base_bdevs_list": [ 00:24:52.081 { 00:24:52.081 "name": "BaseBdev1", 00:24:52.081 "uuid": "13ad1ef5-b422-4fc1-bba1-cd2346b97dfd", 00:24:52.081 "is_configured": true, 00:24:52.081 "data_offset": 0, 00:24:52.081 "data_size": 65536 00:24:52.081 }, 00:24:52.081 { 00:24:52.081 "name": "BaseBdev2", 00:24:52.081 "uuid": "4f9b38e0-43da-4f17-b313-01c3dd3ab0fe", 00:24:52.081 "is_configured": true, 00:24:52.081 "data_offset": 0, 00:24:52.081 "data_size": 65536 00:24:52.081 }, 00:24:52.081 { 00:24:52.081 "name": "BaseBdev3", 00:24:52.081 "uuid": "e6200769-c344-4dbd-a8ed-c9fc6d41866c", 00:24:52.081 "is_configured": true, 00:24:52.081 "data_offset": 0, 00:24:52.081 "data_size": 65536 00:24:52.081 }, 00:24:52.081 { 00:24:52.081 "name": "BaseBdev4", 00:24:52.081 "uuid": "01f081b8-e676-45df-8f0a-1eea8370d941", 00:24:52.081 "is_configured": true, 00:24:52.081 "data_offset": 0, 00:24:52.081 "data_size": 65536 00:24:52.081 } 00:24:52.081 ] 00:24:52.081 } 00:24:52.081 } 00:24:52.081 }' 00:24:52.081 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:52.081 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:52.081 BaseBdev2 
00:24:52.081 BaseBdev3 00:24:52.081 BaseBdev4' 00:24:52.081 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:52.081 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:52.081 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:52.081 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:52.081 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.081 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:52.081 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:52.081 13:44:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.340 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:52.340 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:52.340 13:44:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.340 13:44:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:52.340 13:44:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.340 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:52.340 [2024-11-20 13:44:55.172329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:52.340 [2024-11-20 13:44:55.172367] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:52.340 [2024-11-20 13:44:55.172429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:52.600 "name": "Existed_Raid", 00:24:52.600 "uuid": "c845ea85-2844-4002-8ab4-ff0e1da18e08", 00:24:52.600 "strip_size_kb": 64, 00:24:52.600 "state": "offline", 00:24:52.600 "raid_level": "concat", 00:24:52.600 "superblock": false, 00:24:52.600 "num_base_bdevs": 4, 00:24:52.600 "num_base_bdevs_discovered": 3, 00:24:52.600 "num_base_bdevs_operational": 3, 00:24:52.600 "base_bdevs_list": [ 00:24:52.600 { 00:24:52.600 "name": null, 00:24:52.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.600 "is_configured": false, 00:24:52.600 "data_offset": 0, 00:24:52.600 "data_size": 65536 00:24:52.600 }, 00:24:52.600 { 00:24:52.600 "name": "BaseBdev2", 00:24:52.600 "uuid": "4f9b38e0-43da-4f17-b313-01c3dd3ab0fe", 00:24:52.600 "is_configured": 
true, 00:24:52.600 "data_offset": 0, 00:24:52.600 "data_size": 65536 00:24:52.600 }, 00:24:52.600 { 00:24:52.600 "name": "BaseBdev3", 00:24:52.600 "uuid": "e6200769-c344-4dbd-a8ed-c9fc6d41866c", 00:24:52.600 "is_configured": true, 00:24:52.600 "data_offset": 0, 00:24:52.600 "data_size": 65536 00:24:52.600 }, 00:24:52.600 { 00:24:52.600 "name": "BaseBdev4", 00:24:52.600 "uuid": "01f081b8-e676-45df-8f0a-1eea8370d941", 00:24:52.600 "is_configured": true, 00:24:52.600 "data_offset": 0, 00:24:52.600 "data_size": 65536 00:24:52.600 } 00:24:52.600 ] 00:24:52.600 }' 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:52.600 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:52.861 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:52.861 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.120 [2024-11-20 13:44:55.834132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.120 13:44:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.120 [2024-11-20 13:44:55.991645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:53.381 13:44:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.381 [2024-11-20 13:44:56.147552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:53.381 [2024-11-20 13:44:56.147621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.381 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.640 BaseBdev2 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:53.640 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.641 [ 00:24:53.641 { 00:24:53.641 "name": "BaseBdev2", 00:24:53.641 "aliases": [ 00:24:53.641 "b8a1a1e8-691e-4de6-9f8f-13cf1c4ea35d" 00:24:53.641 ], 00:24:53.641 "product_name": "Malloc disk", 00:24:53.641 "block_size": 512, 00:24:53.641 "num_blocks": 65536, 00:24:53.641 "uuid": "b8a1a1e8-691e-4de6-9f8f-13cf1c4ea35d", 00:24:53.641 "assigned_rate_limits": { 00:24:53.641 "rw_ios_per_sec": 0, 00:24:53.641 "rw_mbytes_per_sec": 0, 00:24:53.641 "r_mbytes_per_sec": 0, 00:24:53.641 "w_mbytes_per_sec": 0 00:24:53.641 }, 00:24:53.641 "claimed": false, 00:24:53.641 "zoned": false, 00:24:53.641 "supported_io_types": { 00:24:53.641 "read": true, 00:24:53.641 "write": true, 00:24:53.641 "unmap": true, 00:24:53.641 "flush": true, 00:24:53.641 "reset": true, 00:24:53.641 "nvme_admin": false, 00:24:53.641 "nvme_io": false, 00:24:53.641 "nvme_io_md": false, 00:24:53.641 "write_zeroes": true, 00:24:53.641 "zcopy": true, 00:24:53.641 "get_zone_info": false, 00:24:53.641 "zone_management": false, 00:24:53.641 "zone_append": false, 00:24:53.641 "compare": false, 00:24:53.641 "compare_and_write": false, 00:24:53.641 "abort": true, 00:24:53.641 "seek_hole": false, 00:24:53.641 "seek_data": false, 
00:24:53.641 "copy": true, 00:24:53.641 "nvme_iov_md": false 00:24:53.641 }, 00:24:53.641 "memory_domains": [ 00:24:53.641 { 00:24:53.641 "dma_device_id": "system", 00:24:53.641 "dma_device_type": 1 00:24:53.641 }, 00:24:53.641 { 00:24:53.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:53.641 "dma_device_type": 2 00:24:53.641 } 00:24:53.641 ], 00:24:53.641 "driver_specific": {} 00:24:53.641 } 00:24:53.641 ] 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.641 BaseBdev3 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:53.641 
13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.641 [ 00:24:53.641 { 00:24:53.641 "name": "BaseBdev3", 00:24:53.641 "aliases": [ 00:24:53.641 "da89348c-2b1f-4493-846d-b5cf034bf5ae" 00:24:53.641 ], 00:24:53.641 "product_name": "Malloc disk", 00:24:53.641 "block_size": 512, 00:24:53.641 "num_blocks": 65536, 00:24:53.641 "uuid": "da89348c-2b1f-4493-846d-b5cf034bf5ae", 00:24:53.641 "assigned_rate_limits": { 00:24:53.641 "rw_ios_per_sec": 0, 00:24:53.641 "rw_mbytes_per_sec": 0, 00:24:53.641 "r_mbytes_per_sec": 0, 00:24:53.641 "w_mbytes_per_sec": 0 00:24:53.641 }, 00:24:53.641 "claimed": false, 00:24:53.641 "zoned": false, 00:24:53.641 "supported_io_types": { 00:24:53.641 "read": true, 00:24:53.641 "write": true, 00:24:53.641 "unmap": true, 00:24:53.641 "flush": true, 00:24:53.641 "reset": true, 00:24:53.641 "nvme_admin": false, 00:24:53.641 "nvme_io": false, 00:24:53.641 "nvme_io_md": false, 00:24:53.641 "write_zeroes": true, 00:24:53.641 "zcopy": true, 00:24:53.641 "get_zone_info": false, 00:24:53.641 "zone_management": false, 00:24:53.641 "zone_append": false, 00:24:53.641 "compare": false, 00:24:53.641 "compare_and_write": false, 00:24:53.641 "abort": true, 00:24:53.641 "seek_hole": false, 00:24:53.641 "seek_data": false, 00:24:53.641 
"copy": true, 00:24:53.641 "nvme_iov_md": false 00:24:53.641 }, 00:24:53.641 "memory_domains": [ 00:24:53.641 { 00:24:53.641 "dma_device_id": "system", 00:24:53.641 "dma_device_type": 1 00:24:53.641 }, 00:24:53.641 { 00:24:53.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:53.641 "dma_device_type": 2 00:24:53.641 } 00:24:53.641 ], 00:24:53.641 "driver_specific": {} 00:24:53.641 } 00:24:53.641 ] 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.641 BaseBdev4 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:53.641 13:44:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:53.641 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.642 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.642 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.642 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:53.642 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.642 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.642 [ 00:24:53.642 { 00:24:53.642 "name": "BaseBdev4", 00:24:53.642 "aliases": [ 00:24:53.642 "6a2d12a5-06dd-46cf-a84c-5691f37094eb" 00:24:53.642 ], 00:24:53.642 "product_name": "Malloc disk", 00:24:53.642 "block_size": 512, 00:24:53.642 "num_blocks": 65536, 00:24:53.642 "uuid": "6a2d12a5-06dd-46cf-a84c-5691f37094eb", 00:24:53.642 "assigned_rate_limits": { 00:24:53.642 "rw_ios_per_sec": 0, 00:24:53.642 "rw_mbytes_per_sec": 0, 00:24:53.642 "r_mbytes_per_sec": 0, 00:24:53.642 "w_mbytes_per_sec": 0 00:24:53.642 }, 00:24:53.642 "claimed": false, 00:24:53.642 "zoned": false, 00:24:53.642 "supported_io_types": { 00:24:53.642 "read": true, 00:24:53.642 "write": true, 00:24:53.642 "unmap": true, 00:24:53.642 "flush": true, 00:24:53.642 "reset": true, 00:24:53.642 "nvme_admin": false, 00:24:53.642 "nvme_io": false, 00:24:53.642 "nvme_io_md": false, 00:24:53.642 "write_zeroes": true, 00:24:53.642 "zcopy": true, 00:24:53.642 "get_zone_info": false, 00:24:53.642 "zone_management": false, 00:24:53.642 "zone_append": false, 00:24:53.642 "compare": false, 00:24:53.642 "compare_and_write": false, 00:24:53.642 "abort": true, 00:24:53.642 "seek_hole": false, 00:24:53.642 "seek_data": false, 00:24:53.642 "copy": true, 
00:24:53.642 "nvme_iov_md": false 00:24:53.642 }, 00:24:53.642 "memory_domains": [ 00:24:53.642 { 00:24:53.642 "dma_device_id": "system", 00:24:53.642 "dma_device_type": 1 00:24:53.642 }, 00:24:53.642 { 00:24:53.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:53.642 "dma_device_type": 2 00:24:53.642 } 00:24:53.642 ], 00:24:53.642 "driver_specific": {} 00:24:53.642 } 00:24:53.642 ] 00:24:53.642 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.642 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:53.642 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:24:53.642 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:24:53.642 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:24:53.642 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.642 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.642 [2024-11-20 13:44:56.550296] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:53.642 [2024-11-20 13:44:56.550527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:53.642 [2024-11-20 13:44:56.550677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:53.642 [2024-11-20 13:44:56.553508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:53.642 [2024-11-20 13:44:56.553710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:53.901 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.901 13:44:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:53.901 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:53.901 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:53.901 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:53.901 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:53.901 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:53.901 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:53.901 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:53.901 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:53.901 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:53.901 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:53.901 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:53.901 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.901 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.901 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.901 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:53.902 "name": "Existed_Raid", 00:24:53.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.902 "strip_size_kb": 64, 00:24:53.902 "state": "configuring", 00:24:53.902 
"raid_level": "concat", 00:24:53.902 "superblock": false, 00:24:53.902 "num_base_bdevs": 4, 00:24:53.902 "num_base_bdevs_discovered": 3, 00:24:53.902 "num_base_bdevs_operational": 4, 00:24:53.902 "base_bdevs_list": [ 00:24:53.902 { 00:24:53.902 "name": "BaseBdev1", 00:24:53.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.902 "is_configured": false, 00:24:53.902 "data_offset": 0, 00:24:53.902 "data_size": 0 00:24:53.902 }, 00:24:53.902 { 00:24:53.902 "name": "BaseBdev2", 00:24:53.902 "uuid": "b8a1a1e8-691e-4de6-9f8f-13cf1c4ea35d", 00:24:53.902 "is_configured": true, 00:24:53.902 "data_offset": 0, 00:24:53.902 "data_size": 65536 00:24:53.902 }, 00:24:53.902 { 00:24:53.902 "name": "BaseBdev3", 00:24:53.902 "uuid": "da89348c-2b1f-4493-846d-b5cf034bf5ae", 00:24:53.902 "is_configured": true, 00:24:53.902 "data_offset": 0, 00:24:53.902 "data_size": 65536 00:24:53.902 }, 00:24:53.902 { 00:24:53.902 "name": "BaseBdev4", 00:24:53.902 "uuid": "6a2d12a5-06dd-46cf-a84c-5691f37094eb", 00:24:53.902 "is_configured": true, 00:24:53.902 "data_offset": 0, 00:24:53.902 "data_size": 65536 00:24:53.902 } 00:24:53.902 ] 00:24:53.902 }' 00:24:53.902 13:44:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:53.902 13:44:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.470 [2024-11-20 13:44:57.106470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:54.470 "name": "Existed_Raid", 00:24:54.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.470 "strip_size_kb": 64, 00:24:54.470 "state": "configuring", 00:24:54.470 "raid_level": "concat", 00:24:54.470 "superblock": false, 
00:24:54.470 "num_base_bdevs": 4, 00:24:54.470 "num_base_bdevs_discovered": 2, 00:24:54.470 "num_base_bdevs_operational": 4, 00:24:54.470 "base_bdevs_list": [ 00:24:54.470 { 00:24:54.470 "name": "BaseBdev1", 00:24:54.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.470 "is_configured": false, 00:24:54.470 "data_offset": 0, 00:24:54.470 "data_size": 0 00:24:54.470 }, 00:24:54.470 { 00:24:54.470 "name": null, 00:24:54.470 "uuid": "b8a1a1e8-691e-4de6-9f8f-13cf1c4ea35d", 00:24:54.470 "is_configured": false, 00:24:54.470 "data_offset": 0, 00:24:54.470 "data_size": 65536 00:24:54.470 }, 00:24:54.470 { 00:24:54.470 "name": "BaseBdev3", 00:24:54.470 "uuid": "da89348c-2b1f-4493-846d-b5cf034bf5ae", 00:24:54.470 "is_configured": true, 00:24:54.470 "data_offset": 0, 00:24:54.470 "data_size": 65536 00:24:54.470 }, 00:24:54.470 { 00:24:54.470 "name": "BaseBdev4", 00:24:54.470 "uuid": "6a2d12a5-06dd-46cf-a84c-5691f37094eb", 00:24:54.470 "is_configured": true, 00:24:54.470 "data_offset": 0, 00:24:54.470 "data_size": 65536 00:24:54.470 } 00:24:54.470 ] 00:24:54.470 }' 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:54.470 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.729 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.729 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:54.729 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.729 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.988 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.988 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:24:54.988 13:44:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:24:54.988 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.988 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.988 [2024-11-20 13:44:57.727293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:54.988 BaseBdev1 00:24:54.988 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.988 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:24:54.988 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:24:54.988 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:54.988 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:54.988 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:54.988 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:54.988 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:54.988 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.988 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.988 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.988 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:54.988 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.988 13:44:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:54.988 [ 00:24:54.988 { 00:24:54.988 "name": "BaseBdev1", 00:24:54.988 "aliases": [ 00:24:54.988 "b18752ba-25dc-4d78-ba05-b6a7f1e2612f" 00:24:54.988 ], 00:24:54.988 "product_name": "Malloc disk", 00:24:54.988 "block_size": 512, 00:24:54.988 "num_blocks": 65536, 00:24:54.988 "uuid": "b18752ba-25dc-4d78-ba05-b6a7f1e2612f", 00:24:54.988 "assigned_rate_limits": { 00:24:54.988 "rw_ios_per_sec": 0, 00:24:54.988 "rw_mbytes_per_sec": 0, 00:24:54.988 "r_mbytes_per_sec": 0, 00:24:54.988 "w_mbytes_per_sec": 0 00:24:54.988 }, 00:24:54.988 "claimed": true, 00:24:54.988 "claim_type": "exclusive_write", 00:24:54.988 "zoned": false, 00:24:54.988 "supported_io_types": { 00:24:54.988 "read": true, 00:24:54.988 "write": true, 00:24:54.988 "unmap": true, 00:24:54.988 "flush": true, 00:24:54.988 "reset": true, 00:24:54.988 "nvme_admin": false, 00:24:54.988 "nvme_io": false, 00:24:54.988 "nvme_io_md": false, 00:24:54.988 "write_zeroes": true, 00:24:54.988 "zcopy": true, 00:24:54.988 "get_zone_info": false, 00:24:54.988 "zone_management": false, 00:24:54.988 "zone_append": false, 00:24:54.988 "compare": false, 00:24:54.988 "compare_and_write": false, 00:24:54.988 "abort": true, 00:24:54.988 "seek_hole": false, 00:24:54.988 "seek_data": false, 00:24:54.989 "copy": true, 00:24:54.989 "nvme_iov_md": false 00:24:54.989 }, 00:24:54.989 "memory_domains": [ 00:24:54.989 { 00:24:54.989 "dma_device_id": "system", 00:24:54.989 "dma_device_type": 1 00:24:54.989 }, 00:24:54.989 { 00:24:54.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:54.989 "dma_device_type": 2 00:24:54.989 } 00:24:54.989 ], 00:24:54.989 "driver_specific": {} 00:24:54.989 } 00:24:54.989 ] 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:54.989 "name": "Existed_Raid", 00:24:54.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.989 "strip_size_kb": 64, 00:24:54.989 "state": "configuring", 00:24:54.989 "raid_level": "concat", 00:24:54.989 "superblock": false, 
00:24:54.989 "num_base_bdevs": 4, 00:24:54.989 "num_base_bdevs_discovered": 3, 00:24:54.989 "num_base_bdevs_operational": 4, 00:24:54.989 "base_bdevs_list": [ 00:24:54.989 { 00:24:54.989 "name": "BaseBdev1", 00:24:54.989 "uuid": "b18752ba-25dc-4d78-ba05-b6a7f1e2612f", 00:24:54.989 "is_configured": true, 00:24:54.989 "data_offset": 0, 00:24:54.989 "data_size": 65536 00:24:54.989 }, 00:24:54.989 { 00:24:54.989 "name": null, 00:24:54.989 "uuid": "b8a1a1e8-691e-4de6-9f8f-13cf1c4ea35d", 00:24:54.989 "is_configured": false, 00:24:54.989 "data_offset": 0, 00:24:54.989 "data_size": 65536 00:24:54.989 }, 00:24:54.989 { 00:24:54.989 "name": "BaseBdev3", 00:24:54.989 "uuid": "da89348c-2b1f-4493-846d-b5cf034bf5ae", 00:24:54.989 "is_configured": true, 00:24:54.989 "data_offset": 0, 00:24:54.989 "data_size": 65536 00:24:54.989 }, 00:24:54.989 { 00:24:54.989 "name": "BaseBdev4", 00:24:54.989 "uuid": "6a2d12a5-06dd-46cf-a84c-5691f37094eb", 00:24:54.989 "is_configured": true, 00:24:54.989 "data_offset": 0, 00:24:54.989 "data_size": 65536 00:24:54.989 } 00:24:54.989 ] 00:24:54.989 }' 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:54.989 13:44:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:24:55.597 13:44:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:55.597 [2024-11-20 13:44:58.379594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:55.597 "name": "Existed_Raid", 00:24:55.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.597 "strip_size_kb": 64, 00:24:55.597 "state": "configuring", 00:24:55.597 "raid_level": "concat", 00:24:55.597 "superblock": false, 00:24:55.597 "num_base_bdevs": 4, 00:24:55.597 "num_base_bdevs_discovered": 2, 00:24:55.597 "num_base_bdevs_operational": 4, 00:24:55.597 "base_bdevs_list": [ 00:24:55.597 { 00:24:55.597 "name": "BaseBdev1", 00:24:55.597 "uuid": "b18752ba-25dc-4d78-ba05-b6a7f1e2612f", 00:24:55.597 "is_configured": true, 00:24:55.597 "data_offset": 0, 00:24:55.597 "data_size": 65536 00:24:55.597 }, 00:24:55.597 { 00:24:55.597 "name": null, 00:24:55.597 "uuid": "b8a1a1e8-691e-4de6-9f8f-13cf1c4ea35d", 00:24:55.597 "is_configured": false, 00:24:55.597 "data_offset": 0, 00:24:55.597 "data_size": 65536 00:24:55.597 }, 00:24:55.597 { 00:24:55.597 "name": null, 00:24:55.597 "uuid": "da89348c-2b1f-4493-846d-b5cf034bf5ae", 00:24:55.597 "is_configured": false, 00:24:55.597 "data_offset": 0, 00:24:55.597 "data_size": 65536 00:24:55.597 }, 00:24:55.597 { 00:24:55.597 "name": "BaseBdev4", 00:24:55.597 "uuid": "6a2d12a5-06dd-46cf-a84c-5691f37094eb", 00:24:55.597 "is_configured": true, 00:24:55.597 "data_offset": 0, 00:24:55.597 "data_size": 65536 00:24:55.597 } 00:24:55.597 ] 00:24:55.597 }' 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:55.597 13:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.164 [2024-11-20 13:44:58.963738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:56.164 13:44:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.164 13:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:56.164 "name": "Existed_Raid", 00:24:56.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.164 "strip_size_kb": 64, 00:24:56.164 "state": "configuring", 00:24:56.164 "raid_level": "concat", 00:24:56.164 "superblock": false, 00:24:56.164 "num_base_bdevs": 4, 00:24:56.164 "num_base_bdevs_discovered": 3, 00:24:56.164 "num_base_bdevs_operational": 4, 00:24:56.164 "base_bdevs_list": [ 00:24:56.164 { 00:24:56.164 "name": "BaseBdev1", 00:24:56.164 "uuid": "b18752ba-25dc-4d78-ba05-b6a7f1e2612f", 00:24:56.164 "is_configured": true, 00:24:56.164 "data_offset": 0, 00:24:56.164 "data_size": 65536 00:24:56.164 }, 00:24:56.164 { 00:24:56.164 "name": null, 00:24:56.164 "uuid": "b8a1a1e8-691e-4de6-9f8f-13cf1c4ea35d", 00:24:56.164 "is_configured": false, 00:24:56.164 "data_offset": 0, 00:24:56.164 "data_size": 65536 00:24:56.164 }, 00:24:56.164 { 00:24:56.164 "name": "BaseBdev3", 00:24:56.164 "uuid": "da89348c-2b1f-4493-846d-b5cf034bf5ae", 00:24:56.164 
"is_configured": true, 00:24:56.164 "data_offset": 0, 00:24:56.164 "data_size": 65536 00:24:56.164 }, 00:24:56.164 { 00:24:56.164 "name": "BaseBdev4", 00:24:56.164 "uuid": "6a2d12a5-06dd-46cf-a84c-5691f37094eb", 00:24:56.164 "is_configured": true, 00:24:56.164 "data_offset": 0, 00:24:56.164 "data_size": 65536 00:24:56.164 } 00:24:56.164 ] 00:24:56.164 }' 00:24:56.164 13:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:56.164 13:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.733 [2024-11-20 13:44:59.520023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:56.733 13:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.991 13:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:56.992 "name": "Existed_Raid", 00:24:56.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.992 "strip_size_kb": 64, 00:24:56.992 "state": "configuring", 00:24:56.992 "raid_level": "concat", 00:24:56.992 "superblock": false, 00:24:56.992 "num_base_bdevs": 4, 00:24:56.992 "num_base_bdevs_discovered": 2, 00:24:56.992 "num_base_bdevs_operational": 4, 
00:24:56.992 "base_bdevs_list": [ 00:24:56.992 { 00:24:56.992 "name": null, 00:24:56.992 "uuid": "b18752ba-25dc-4d78-ba05-b6a7f1e2612f", 00:24:56.992 "is_configured": false, 00:24:56.992 "data_offset": 0, 00:24:56.992 "data_size": 65536 00:24:56.992 }, 00:24:56.992 { 00:24:56.992 "name": null, 00:24:56.992 "uuid": "b8a1a1e8-691e-4de6-9f8f-13cf1c4ea35d", 00:24:56.992 "is_configured": false, 00:24:56.992 "data_offset": 0, 00:24:56.992 "data_size": 65536 00:24:56.992 }, 00:24:56.992 { 00:24:56.992 "name": "BaseBdev3", 00:24:56.992 "uuid": "da89348c-2b1f-4493-846d-b5cf034bf5ae", 00:24:56.992 "is_configured": true, 00:24:56.992 "data_offset": 0, 00:24:56.992 "data_size": 65536 00:24:56.992 }, 00:24:56.992 { 00:24:56.992 "name": "BaseBdev4", 00:24:56.992 "uuid": "6a2d12a5-06dd-46cf-a84c-5691f37094eb", 00:24:56.992 "is_configured": true, 00:24:56.992 "data_offset": 0, 00:24:56.992 "data_size": 65536 00:24:56.992 } 00:24:56.992 ] 00:24:56.992 }' 00:24:56.992 13:44:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:56.992 13:44:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:57.250 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:57.250 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.250 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:57.250 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:57.509 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.509 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:24:57.509 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:57.509 13:45:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.509 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:57.509 [2024-11-20 13:45:00.206662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:57.509 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.509 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:57.509 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:57.509 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:57.509 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:57.509 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:57.509 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:57.509 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:57.509 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:57.509 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:57.509 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:57.509 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:57.509 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.510 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:57.510 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:24:57.510 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.510 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:57.510 "name": "Existed_Raid", 00:24:57.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.510 "strip_size_kb": 64, 00:24:57.510 "state": "configuring", 00:24:57.510 "raid_level": "concat", 00:24:57.510 "superblock": false, 00:24:57.510 "num_base_bdevs": 4, 00:24:57.510 "num_base_bdevs_discovered": 3, 00:24:57.510 "num_base_bdevs_operational": 4, 00:24:57.510 "base_bdevs_list": [ 00:24:57.510 { 00:24:57.510 "name": null, 00:24:57.510 "uuid": "b18752ba-25dc-4d78-ba05-b6a7f1e2612f", 00:24:57.510 "is_configured": false, 00:24:57.510 "data_offset": 0, 00:24:57.510 "data_size": 65536 00:24:57.510 }, 00:24:57.510 { 00:24:57.510 "name": "BaseBdev2", 00:24:57.510 "uuid": "b8a1a1e8-691e-4de6-9f8f-13cf1c4ea35d", 00:24:57.510 "is_configured": true, 00:24:57.510 "data_offset": 0, 00:24:57.510 "data_size": 65536 00:24:57.510 }, 00:24:57.510 { 00:24:57.510 "name": "BaseBdev3", 00:24:57.510 "uuid": "da89348c-2b1f-4493-846d-b5cf034bf5ae", 00:24:57.510 "is_configured": true, 00:24:57.510 "data_offset": 0, 00:24:57.510 "data_size": 65536 00:24:57.510 }, 00:24:57.510 { 00:24:57.510 "name": "BaseBdev4", 00:24:57.510 "uuid": "6a2d12a5-06dd-46cf-a84c-5691f37094eb", 00:24:57.510 "is_configured": true, 00:24:57.510 "data_offset": 0, 00:24:57.510 "data_size": 65536 00:24:57.510 } 00:24:57.510 ] 00:24:57.510 }' 00:24:57.510 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:57.510 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- 
# rpc_cmd bdev_raid_get_bdevs all 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b18752ba-25dc-4d78-ba05-b6a7f1e2612f 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.080 [2024-11-20 13:45:00.902241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:58.080 [2024-11-20 13:45:00.902343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:58.080 [2024-11-20 13:45:00.902355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:24:58.080 [2024-11-20 13:45:00.902704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:24:58.080 [2024-11-20 13:45:00.902911] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:58.080 [2024-11-20 13:45:00.902940] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:24:58.080 [2024-11-20 13:45:00.903253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:58.080 NewBaseBdev 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.080 [ 00:24:58.080 { 
00:24:58.080 "name": "NewBaseBdev", 00:24:58.080 "aliases": [ 00:24:58.080 "b18752ba-25dc-4d78-ba05-b6a7f1e2612f" 00:24:58.080 ], 00:24:58.080 "product_name": "Malloc disk", 00:24:58.080 "block_size": 512, 00:24:58.080 "num_blocks": 65536, 00:24:58.080 "uuid": "b18752ba-25dc-4d78-ba05-b6a7f1e2612f", 00:24:58.080 "assigned_rate_limits": { 00:24:58.080 "rw_ios_per_sec": 0, 00:24:58.080 "rw_mbytes_per_sec": 0, 00:24:58.080 "r_mbytes_per_sec": 0, 00:24:58.080 "w_mbytes_per_sec": 0 00:24:58.080 }, 00:24:58.080 "claimed": true, 00:24:58.080 "claim_type": "exclusive_write", 00:24:58.080 "zoned": false, 00:24:58.080 "supported_io_types": { 00:24:58.080 "read": true, 00:24:58.080 "write": true, 00:24:58.080 "unmap": true, 00:24:58.080 "flush": true, 00:24:58.080 "reset": true, 00:24:58.080 "nvme_admin": false, 00:24:58.080 "nvme_io": false, 00:24:58.080 "nvme_io_md": false, 00:24:58.080 "write_zeroes": true, 00:24:58.080 "zcopy": true, 00:24:58.080 "get_zone_info": false, 00:24:58.080 "zone_management": false, 00:24:58.080 "zone_append": false, 00:24:58.080 "compare": false, 00:24:58.080 "compare_and_write": false, 00:24:58.080 "abort": true, 00:24:58.080 "seek_hole": false, 00:24:58.080 "seek_data": false, 00:24:58.080 "copy": true, 00:24:58.080 "nvme_iov_md": false 00:24:58.080 }, 00:24:58.080 "memory_domains": [ 00:24:58.080 { 00:24:58.080 "dma_device_id": "system", 00:24:58.080 "dma_device_type": 1 00:24:58.080 }, 00:24:58.080 { 00:24:58.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.080 "dma_device_type": 2 00:24:58.080 } 00:24:58.080 ], 00:24:58.080 "driver_specific": {} 00:24:58.080 } 00:24:58.080 ] 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:24:58.080 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:58.080 
13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:58.081 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:58.081 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:24:58.081 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:24:58.081 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:24:58.081 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:58.081 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:58.081 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:58.081 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:58.081 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:58.081 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:58.081 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.081 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.081 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.081 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:58.081 "name": "Existed_Raid", 00:24:58.081 "uuid": "cf82abe9-568c-44ef-a59b-1a93bcd97be0", 00:24:58.081 "strip_size_kb": 64, 00:24:58.081 "state": "online", 00:24:58.081 "raid_level": "concat", 00:24:58.081 "superblock": false, 00:24:58.081 "num_base_bdevs": 4, 00:24:58.081 "num_base_bdevs_discovered": 4, 00:24:58.081 
"num_base_bdevs_operational": 4, 00:24:58.081 "base_bdevs_list": [ 00:24:58.081 { 00:24:58.081 "name": "NewBaseBdev", 00:24:58.081 "uuid": "b18752ba-25dc-4d78-ba05-b6a7f1e2612f", 00:24:58.081 "is_configured": true, 00:24:58.081 "data_offset": 0, 00:24:58.081 "data_size": 65536 00:24:58.081 }, 00:24:58.081 { 00:24:58.081 "name": "BaseBdev2", 00:24:58.081 "uuid": "b8a1a1e8-691e-4de6-9f8f-13cf1c4ea35d", 00:24:58.081 "is_configured": true, 00:24:58.081 "data_offset": 0, 00:24:58.081 "data_size": 65536 00:24:58.081 }, 00:24:58.081 { 00:24:58.081 "name": "BaseBdev3", 00:24:58.081 "uuid": "da89348c-2b1f-4493-846d-b5cf034bf5ae", 00:24:58.081 "is_configured": true, 00:24:58.081 "data_offset": 0, 00:24:58.081 "data_size": 65536 00:24:58.081 }, 00:24:58.081 { 00:24:58.081 "name": "BaseBdev4", 00:24:58.081 "uuid": "6a2d12a5-06dd-46cf-a84c-5691f37094eb", 00:24:58.081 "is_configured": true, 00:24:58.081 "data_offset": 0, 00:24:58.081 "data_size": 65536 00:24:58.081 } 00:24:58.081 ] 00:24:58.081 }' 00:24:58.081 13:45:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:58.081 13:45:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.650 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:24:58.650 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:58.650 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:58.650 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:58.650 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:24:58.650 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:58.650 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:24:58.650 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.650 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:58.650 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.650 [2024-11-20 13:45:01.454875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:58.650 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.650 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:58.650 "name": "Existed_Raid", 00:24:58.650 "aliases": [ 00:24:58.650 "cf82abe9-568c-44ef-a59b-1a93bcd97be0" 00:24:58.650 ], 00:24:58.650 "product_name": "Raid Volume", 00:24:58.650 "block_size": 512, 00:24:58.650 "num_blocks": 262144, 00:24:58.650 "uuid": "cf82abe9-568c-44ef-a59b-1a93bcd97be0", 00:24:58.650 "assigned_rate_limits": { 00:24:58.650 "rw_ios_per_sec": 0, 00:24:58.650 "rw_mbytes_per_sec": 0, 00:24:58.650 "r_mbytes_per_sec": 0, 00:24:58.650 "w_mbytes_per_sec": 0 00:24:58.650 }, 00:24:58.650 "claimed": false, 00:24:58.650 "zoned": false, 00:24:58.650 "supported_io_types": { 00:24:58.650 "read": true, 00:24:58.650 "write": true, 00:24:58.650 "unmap": true, 00:24:58.650 "flush": true, 00:24:58.650 "reset": true, 00:24:58.650 "nvme_admin": false, 00:24:58.650 "nvme_io": false, 00:24:58.650 "nvme_io_md": false, 00:24:58.650 "write_zeroes": true, 00:24:58.650 "zcopy": false, 00:24:58.650 "get_zone_info": false, 00:24:58.650 "zone_management": false, 00:24:58.650 "zone_append": false, 00:24:58.650 "compare": false, 00:24:58.650 "compare_and_write": false, 00:24:58.650 "abort": false, 00:24:58.650 "seek_hole": false, 00:24:58.650 "seek_data": false, 00:24:58.650 "copy": false, 00:24:58.650 "nvme_iov_md": false 00:24:58.650 }, 00:24:58.650 "memory_domains": [ 00:24:58.650 { 00:24:58.650 "dma_device_id": "system", 
00:24:58.650 "dma_device_type": 1 00:24:58.650 }, 00:24:58.650 { 00:24:58.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.650 "dma_device_type": 2 00:24:58.650 }, 00:24:58.650 { 00:24:58.650 "dma_device_id": "system", 00:24:58.650 "dma_device_type": 1 00:24:58.650 }, 00:24:58.650 { 00:24:58.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.650 "dma_device_type": 2 00:24:58.650 }, 00:24:58.650 { 00:24:58.650 "dma_device_id": "system", 00:24:58.650 "dma_device_type": 1 00:24:58.650 }, 00:24:58.650 { 00:24:58.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.650 "dma_device_type": 2 00:24:58.650 }, 00:24:58.650 { 00:24:58.650 "dma_device_id": "system", 00:24:58.650 "dma_device_type": 1 00:24:58.650 }, 00:24:58.650 { 00:24:58.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.650 "dma_device_type": 2 00:24:58.650 } 00:24:58.650 ], 00:24:58.650 "driver_specific": { 00:24:58.650 "raid": { 00:24:58.650 "uuid": "cf82abe9-568c-44ef-a59b-1a93bcd97be0", 00:24:58.650 "strip_size_kb": 64, 00:24:58.650 "state": "online", 00:24:58.650 "raid_level": "concat", 00:24:58.650 "superblock": false, 00:24:58.650 "num_base_bdevs": 4, 00:24:58.650 "num_base_bdevs_discovered": 4, 00:24:58.650 "num_base_bdevs_operational": 4, 00:24:58.650 "base_bdevs_list": [ 00:24:58.650 { 00:24:58.650 "name": "NewBaseBdev", 00:24:58.650 "uuid": "b18752ba-25dc-4d78-ba05-b6a7f1e2612f", 00:24:58.650 "is_configured": true, 00:24:58.650 "data_offset": 0, 00:24:58.650 "data_size": 65536 00:24:58.650 }, 00:24:58.650 { 00:24:58.650 "name": "BaseBdev2", 00:24:58.650 "uuid": "b8a1a1e8-691e-4de6-9f8f-13cf1c4ea35d", 00:24:58.650 "is_configured": true, 00:24:58.650 "data_offset": 0, 00:24:58.650 "data_size": 65536 00:24:58.650 }, 00:24:58.650 { 00:24:58.650 "name": "BaseBdev3", 00:24:58.650 "uuid": "da89348c-2b1f-4493-846d-b5cf034bf5ae", 00:24:58.650 "is_configured": true, 00:24:58.650 "data_offset": 0, 00:24:58.650 "data_size": 65536 00:24:58.650 }, 00:24:58.650 { 00:24:58.650 "name": "BaseBdev4", 
00:24:58.650 "uuid": "6a2d12a5-06dd-46cf-a84c-5691f37094eb", 00:24:58.650 "is_configured": true, 00:24:58.650 "data_offset": 0, 00:24:58.650 "data_size": 65536 00:24:58.650 } 00:24:58.650 ] 00:24:58.650 } 00:24:58.650 } 00:24:58.650 }' 00:24:58.650 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:58.650 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:24:58.650 BaseBdev2 00:24:58.650 BaseBdev3 00:24:58.650 BaseBdev4' 00:24:58.650 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:24:58.910 13:45:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.910 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.169 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:24:59.169 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:24:59.169 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:59.169 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.169 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.169 [2024-11-20 13:45:01.830650] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:59.169 [2024-11-20 13:45:01.830694] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:59.169 [2024-11-20 13:45:01.830810] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:59.169 [2024-11-20 13:45:01.830926] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:59.169 [2024-11-20 13:45:01.830944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:24:59.169 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.169 13:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71580 00:24:59.169 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 71580 ']' 00:24:59.169 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71580 00:24:59.169 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:24:59.169 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:59.169 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71580 00:24:59.169 killing process with pid 71580 00:24:59.169 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:59.169 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:59.169 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71580' 00:24:59.169 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71580 00:24:59.169 [2024-11-20 13:45:01.873868] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:59.169 13:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71580 00:24:59.428 [2024-11-20 13:45:02.228086] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:25:00.806 00:25:00.806 real 0m13.158s 00:25:00.806 user 0m21.764s 00:25:00.806 sys 0m1.886s 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:00.806 ************************************ 00:25:00.806 END TEST raid_state_function_test 00:25:00.806 ************************************ 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.806 13:45:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:25:00.806 13:45:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:00.806 13:45:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:00.806 13:45:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:00.806 ************************************ 00:25:00.806 START TEST raid_state_function_test_sb 00:25:00.806 ************************************ 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:00.806 13:45:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72269 00:25:00.806 Process raid pid: 72269 00:25:00.806 13:45:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72269' 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72269 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72269 ']' 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.806 13:45:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:00.806 [2024-11-20 13:45:03.464356] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:25:00.806 [2024-11-20 13:45:03.464562] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.806 [2024-11-20 13:45:03.658822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.065 [2024-11-20 13:45:03.822637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.324 [2024-11-20 13:45:04.049123] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:01.324 [2024-11-20 13:45:04.049210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:01.892 [2024-11-20 13:45:04.560888] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:01.892 [2024-11-20 13:45:04.560979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:01.892 [2024-11-20 13:45:04.560998] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:01.892 [2024-11-20 13:45:04.561015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:01.892 [2024-11-20 13:45:04.561026] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:25:01.892 [2024-11-20 13:45:04.561040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:01.892 [2024-11-20 13:45:04.561050] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:01.892 [2024-11-20 13:45:04.561064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:01.892 
13:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.892 13:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:01.892 "name": "Existed_Raid", 00:25:01.893 "uuid": "cdd57e8d-c2c3-4f83-9784-1c7448a47c3b", 00:25:01.893 "strip_size_kb": 64, 00:25:01.893 "state": "configuring", 00:25:01.893 "raid_level": "concat", 00:25:01.893 "superblock": true, 00:25:01.893 "num_base_bdevs": 4, 00:25:01.893 "num_base_bdevs_discovered": 0, 00:25:01.893 "num_base_bdevs_operational": 4, 00:25:01.893 "base_bdevs_list": [ 00:25:01.893 { 00:25:01.893 "name": "BaseBdev1", 00:25:01.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.893 "is_configured": false, 00:25:01.893 "data_offset": 0, 00:25:01.893 "data_size": 0 00:25:01.893 }, 00:25:01.893 { 00:25:01.893 "name": "BaseBdev2", 00:25:01.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.893 "is_configured": false, 00:25:01.893 "data_offset": 0, 00:25:01.893 "data_size": 0 00:25:01.893 }, 00:25:01.893 { 00:25:01.893 "name": "BaseBdev3", 00:25:01.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.893 "is_configured": false, 00:25:01.893 "data_offset": 0, 00:25:01.893 "data_size": 0 00:25:01.893 }, 00:25:01.893 { 00:25:01.893 "name": "BaseBdev4", 00:25:01.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.893 "is_configured": false, 00:25:01.893 "data_offset": 0, 00:25:01.893 "data_size": 0 00:25:01.893 } 00:25:01.893 ] 00:25:01.893 }' 00:25:01.893 13:45:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:01.893 13:45:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:02.459 13:45:05 
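In the JSON dumped above, the raid bdev reports `"state": "configuring"` with `num_base_bdevs_discovered` 0 out of 4 operational. The expectation the test keeps verifying can be sketched as a tiny helper (`expected_state` is a hypothetical name for this sketch, not an SPDK or autotest function):

```shell
# Hypothetical helper mirroring the state the test expects: a raid
# bdev stays "configuring" until every operational base bdev has been
# discovered and claimed, then it can go "online".
expected_state() {
	local discovered=$1 operational=$2
	if ((discovered < operational)); then
		echo configuring
	else
		echo online
	fi
}

expected_state 0 4
expected_state 4 4
```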
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:02.459 [2024-11-20 13:45:05.101045] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:02.459 [2024-11-20 13:45:05.101120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:02.459 [2024-11-20 13:45:05.112985] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:02.459 [2024-11-20 13:45:05.113063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:02.459 [2024-11-20 13:45:05.113080] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:02.459 [2024-11-20 13:45:05.113095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:02.459 [2024-11-20 13:45:05.113105] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:02.459 [2024-11-20 13:45:05.113119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:02.459 [2024-11-20 13:45:05.113128] bdev.c:8685:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:25:02.459 [2024-11-20 13:45:05.113141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:02.459 [2024-11-20 13:45:05.160235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:02.459 BaseBdev1 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb 
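The `waitforbdev BaseBdev1` call traced above polls `bdev_get_bdevs -b BaseBdev1 -t 2000` until the bdev appears. The general retry shape, decoupled from SPDK (`waitfor` and its arguments are illustrative, not the `autotest_common.sh` implementation):

```shell
# Illustrative poll-until-timeout loop in the spirit of waitforbdev.
# $1 is a timeout in milliseconds; the remaining arguments form the
# command to retry (in the real test this is an rpc.py bdev query).
waitfor() {
	local timeout_ms=$1 waited=0
	shift
	until "$@"; do
		if ((waited >= timeout_ms)); then
			return 1
		fi
		sleep 0.1
		((waited += 100))
	done
}

waitfor 2000 test -d / && echo "bdev ready"
```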
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.459 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:02.460 [ 00:25:02.460 { 00:25:02.460 "name": "BaseBdev1", 00:25:02.460 "aliases": [ 00:25:02.460 "e0e6b712-bc71-4348-ad06-322b49291ef3" 00:25:02.460 ], 00:25:02.460 "product_name": "Malloc disk", 00:25:02.460 "block_size": 512, 00:25:02.460 "num_blocks": 65536, 00:25:02.460 "uuid": "e0e6b712-bc71-4348-ad06-322b49291ef3", 00:25:02.460 "assigned_rate_limits": { 00:25:02.460 "rw_ios_per_sec": 0, 00:25:02.460 "rw_mbytes_per_sec": 0, 00:25:02.460 "r_mbytes_per_sec": 0, 00:25:02.460 "w_mbytes_per_sec": 0 00:25:02.460 }, 00:25:02.460 "claimed": true, 00:25:02.460 "claim_type": "exclusive_write", 00:25:02.460 "zoned": false, 00:25:02.460 "supported_io_types": { 00:25:02.460 "read": true, 00:25:02.460 "write": true, 00:25:02.460 "unmap": true, 00:25:02.460 "flush": true, 00:25:02.460 "reset": true, 00:25:02.460 "nvme_admin": false, 00:25:02.460 "nvme_io": false, 00:25:02.460 "nvme_io_md": false, 00:25:02.460 "write_zeroes": true, 00:25:02.460 "zcopy": true, 00:25:02.460 "get_zone_info": false, 00:25:02.460 "zone_management": false, 00:25:02.460 "zone_append": false, 00:25:02.460 "compare": false, 00:25:02.460 "compare_and_write": false, 00:25:02.460 "abort": true, 00:25:02.460 "seek_hole": false, 00:25:02.460 "seek_data": false, 00:25:02.460 "copy": true, 00:25:02.460 "nvme_iov_md": false 00:25:02.460 }, 00:25:02.460 "memory_domains": [ 00:25:02.460 { 00:25:02.460 "dma_device_id": "system", 00:25:02.460 "dma_device_type": 1 00:25:02.460 }, 00:25:02.460 { 00:25:02.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.460 "dma_device_type": 2 00:25:02.460 } 
00:25:02.460 ], 00:25:02.460 "driver_specific": {} 00:25:02.460 } 00:25:02.460 ] 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:02.460 13:45:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:02.460 "name": "Existed_Raid", 00:25:02.460 "uuid": "8c75b6be-c718-4965-adc2-7b6eb735848e", 00:25:02.460 "strip_size_kb": 64, 00:25:02.460 "state": "configuring", 00:25:02.460 "raid_level": "concat", 00:25:02.460 "superblock": true, 00:25:02.460 "num_base_bdevs": 4, 00:25:02.460 "num_base_bdevs_discovered": 1, 00:25:02.460 "num_base_bdevs_operational": 4, 00:25:02.460 "base_bdevs_list": [ 00:25:02.460 { 00:25:02.460 "name": "BaseBdev1", 00:25:02.460 "uuid": "e0e6b712-bc71-4348-ad06-322b49291ef3", 00:25:02.460 "is_configured": true, 00:25:02.460 "data_offset": 2048, 00:25:02.460 "data_size": 63488 00:25:02.460 }, 00:25:02.460 { 00:25:02.460 "name": "BaseBdev2", 00:25:02.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.460 "is_configured": false, 00:25:02.460 "data_offset": 0, 00:25:02.460 "data_size": 0 00:25:02.460 }, 00:25:02.460 { 00:25:02.460 "name": "BaseBdev3", 00:25:02.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.460 "is_configured": false, 00:25:02.460 "data_offset": 0, 00:25:02.460 "data_size": 0 00:25:02.460 }, 00:25:02.460 { 00:25:02.460 "name": "BaseBdev4", 00:25:02.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:02.460 "is_configured": false, 00:25:02.460 "data_offset": 0, 00:25:02.460 "data_size": 0 00:25:02.460 } 00:25:02.460 ] 00:25:02.460 }' 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:02.460 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:03.026 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:03.026 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.026 13:45:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:03.026 [2024-11-20 13:45:05.740610] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:03.026 [2024-11-20 13:45:05.741129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:03.026 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.026 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:03.026 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:03.027 [2024-11-20 13:45:05.748717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:03.027 [2024-11-20 13:45:05.752262] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:03.027 [2024-11-20 13:45:05.752367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:03.027 [2024-11-20 13:45:05.752388] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:03.027 [2024-11-20 13:45:05.752433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:03.027 [2024-11-20 13:45:05.752446] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:03.027 [2024-11-20 13:45:05.752463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:25:03.027 "name": "Existed_Raid", 00:25:03.027 "uuid": "d80621ea-6551-446a-b2be-2f15bca28644", 00:25:03.027 "strip_size_kb": 64, 00:25:03.027 "state": "configuring", 00:25:03.027 "raid_level": "concat", 00:25:03.027 "superblock": true, 00:25:03.027 "num_base_bdevs": 4, 00:25:03.027 "num_base_bdevs_discovered": 1, 00:25:03.027 "num_base_bdevs_operational": 4, 00:25:03.027 "base_bdevs_list": [ 00:25:03.027 { 00:25:03.027 "name": "BaseBdev1", 00:25:03.027 "uuid": "e0e6b712-bc71-4348-ad06-322b49291ef3", 00:25:03.027 "is_configured": true, 00:25:03.027 "data_offset": 2048, 00:25:03.027 "data_size": 63488 00:25:03.027 }, 00:25:03.027 { 00:25:03.027 "name": "BaseBdev2", 00:25:03.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.027 "is_configured": false, 00:25:03.027 "data_offset": 0, 00:25:03.027 "data_size": 0 00:25:03.027 }, 00:25:03.027 { 00:25:03.027 "name": "BaseBdev3", 00:25:03.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.027 "is_configured": false, 00:25:03.027 "data_offset": 0, 00:25:03.027 "data_size": 0 00:25:03.027 }, 00:25:03.027 { 00:25:03.027 "name": "BaseBdev4", 00:25:03.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.027 "is_configured": false, 00:25:03.027 "data_offset": 0, 00:25:03.027 "data_size": 0 00:25:03.027 } 00:25:03.027 ] 00:25:03.027 }' 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:03.027 13:45:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:03.594 [2024-11-20 13:45:06.323258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:25:03.594 BaseBdev2 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:03.594 [ 00:25:03.594 { 00:25:03.594 "name": "BaseBdev2", 00:25:03.594 "aliases": [ 00:25:03.594 "40523e20-d4b1-4d40-b18c-22aaa81cd233" 00:25:03.594 ], 00:25:03.594 "product_name": "Malloc disk", 00:25:03.594 "block_size": 512, 00:25:03.594 "num_blocks": 65536, 00:25:03.594 "uuid": "40523e20-d4b1-4d40-b18c-22aaa81cd233", 
00:25:03.594 "assigned_rate_limits": { 00:25:03.594 "rw_ios_per_sec": 0, 00:25:03.594 "rw_mbytes_per_sec": 0, 00:25:03.594 "r_mbytes_per_sec": 0, 00:25:03.594 "w_mbytes_per_sec": 0 00:25:03.594 }, 00:25:03.594 "claimed": true, 00:25:03.594 "claim_type": "exclusive_write", 00:25:03.594 "zoned": false, 00:25:03.594 "supported_io_types": { 00:25:03.594 "read": true, 00:25:03.594 "write": true, 00:25:03.594 "unmap": true, 00:25:03.594 "flush": true, 00:25:03.594 "reset": true, 00:25:03.594 "nvme_admin": false, 00:25:03.594 "nvme_io": false, 00:25:03.594 "nvme_io_md": false, 00:25:03.594 "write_zeroes": true, 00:25:03.594 "zcopy": true, 00:25:03.594 "get_zone_info": false, 00:25:03.594 "zone_management": false, 00:25:03.594 "zone_append": false, 00:25:03.594 "compare": false, 00:25:03.594 "compare_and_write": false, 00:25:03.594 "abort": true, 00:25:03.594 "seek_hole": false, 00:25:03.594 "seek_data": false, 00:25:03.594 "copy": true, 00:25:03.594 "nvme_iov_md": false 00:25:03.594 }, 00:25:03.594 "memory_domains": [ 00:25:03.594 { 00:25:03.594 "dma_device_id": "system", 00:25:03.594 "dma_device_type": 1 00:25:03.594 }, 00:25:03.594 { 00:25:03.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.594 "dma_device_type": 2 00:25:03.594 } 00:25:03.594 ], 00:25:03.594 "driver_specific": {} 00:25:03.594 } 00:25:03.594 ] 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:03.594 "name": "Existed_Raid", 00:25:03.594 "uuid": "d80621ea-6551-446a-b2be-2f15bca28644", 00:25:03.594 "strip_size_kb": 64, 00:25:03.594 "state": "configuring", 00:25:03.594 "raid_level": "concat", 00:25:03.594 "superblock": true, 00:25:03.594 "num_base_bdevs": 4, 00:25:03.594 "num_base_bdevs_discovered": 2, 00:25:03.594 
"num_base_bdevs_operational": 4, 00:25:03.594 "base_bdevs_list": [ 00:25:03.594 { 00:25:03.594 "name": "BaseBdev1", 00:25:03.594 "uuid": "e0e6b712-bc71-4348-ad06-322b49291ef3", 00:25:03.594 "is_configured": true, 00:25:03.594 "data_offset": 2048, 00:25:03.594 "data_size": 63488 00:25:03.594 }, 00:25:03.594 { 00:25:03.594 "name": "BaseBdev2", 00:25:03.594 "uuid": "40523e20-d4b1-4d40-b18c-22aaa81cd233", 00:25:03.594 "is_configured": true, 00:25:03.594 "data_offset": 2048, 00:25:03.594 "data_size": 63488 00:25:03.594 }, 00:25:03.594 { 00:25:03.594 "name": "BaseBdev3", 00:25:03.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.594 "is_configured": false, 00:25:03.594 "data_offset": 0, 00:25:03.594 "data_size": 0 00:25:03.594 }, 00:25:03.594 { 00:25:03.594 "name": "BaseBdev4", 00:25:03.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.594 "is_configured": false, 00:25:03.594 "data_offset": 0, 00:25:03.594 "data_size": 0 00:25:03.594 } 00:25:03.594 ] 00:25:03.594 }' 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:03.594 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:04.162 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:04.162 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.162 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:04.162 [2024-11-20 13:45:06.966659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:04.162 BaseBdev3 00:25:04.162 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.162 13:45:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:04.162 13:45:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:04.162 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:04.162 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:04.162 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:04.162 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:04.162 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:04.162 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.162 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:04.162 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.162 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:04.162 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.162 13:45:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:04.162 [ 00:25:04.162 { 00:25:04.162 "name": "BaseBdev3", 00:25:04.162 "aliases": [ 00:25:04.162 "2f175725-6f73-4009-93a5-e21e44f1bc0a" 00:25:04.162 ], 00:25:04.162 "product_name": "Malloc disk", 00:25:04.162 "block_size": 512, 00:25:04.162 "num_blocks": 65536, 00:25:04.162 "uuid": "2f175725-6f73-4009-93a5-e21e44f1bc0a", 00:25:04.162 "assigned_rate_limits": { 00:25:04.162 "rw_ios_per_sec": 0, 00:25:04.162 "rw_mbytes_per_sec": 0, 00:25:04.162 "r_mbytes_per_sec": 0, 00:25:04.162 "w_mbytes_per_sec": 0 00:25:04.162 }, 00:25:04.162 "claimed": true, 00:25:04.162 "claim_type": "exclusive_write", 00:25:04.162 "zoned": false, 00:25:04.162 "supported_io_types": { 
00:25:04.162 "read": true, 00:25:04.162 "write": true, 00:25:04.162 "unmap": true, 00:25:04.162 "flush": true, 00:25:04.162 "reset": true, 00:25:04.162 "nvme_admin": false, 00:25:04.162 "nvme_io": false, 00:25:04.162 "nvme_io_md": false, 00:25:04.162 "write_zeroes": true, 00:25:04.162 "zcopy": true, 00:25:04.162 "get_zone_info": false, 00:25:04.162 "zone_management": false, 00:25:04.162 "zone_append": false, 00:25:04.162 "compare": false, 00:25:04.162 "compare_and_write": false, 00:25:04.162 "abort": true, 00:25:04.162 "seek_hole": false, 00:25:04.162 "seek_data": false, 00:25:04.162 "copy": true, 00:25:04.162 "nvme_iov_md": false 00:25:04.162 }, 00:25:04.162 "memory_domains": [ 00:25:04.162 { 00:25:04.162 "dma_device_id": "system", 00:25:04.162 "dma_device_type": 1 00:25:04.162 }, 00:25:04.162 { 00:25:04.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.162 "dma_device_type": 2 00:25:04.162 } 00:25:04.162 ], 00:25:04.162 "driver_specific": {} 00:25:04.162 } 00:25:04.162 ] 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.162 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:04.162 "name": "Existed_Raid", 00:25:04.162 "uuid": "d80621ea-6551-446a-b2be-2f15bca28644", 00:25:04.162 "strip_size_kb": 64, 00:25:04.162 "state": "configuring", 00:25:04.162 "raid_level": "concat", 00:25:04.162 "superblock": true, 00:25:04.162 "num_base_bdevs": 4, 00:25:04.162 "num_base_bdevs_discovered": 3, 00:25:04.162 "num_base_bdevs_operational": 4, 00:25:04.163 "base_bdevs_list": [ 00:25:04.163 { 00:25:04.163 "name": "BaseBdev1", 00:25:04.163 "uuid": "e0e6b712-bc71-4348-ad06-322b49291ef3", 00:25:04.163 "is_configured": true, 00:25:04.163 "data_offset": 2048, 00:25:04.163 "data_size": 63488 00:25:04.163 }, 00:25:04.163 { 00:25:04.163 "name": "BaseBdev2", 00:25:04.163 
"uuid": "40523e20-d4b1-4d40-b18c-22aaa81cd233", 00:25:04.163 "is_configured": true, 00:25:04.163 "data_offset": 2048, 00:25:04.163 "data_size": 63488 00:25:04.163 }, 00:25:04.163 { 00:25:04.163 "name": "BaseBdev3", 00:25:04.163 "uuid": "2f175725-6f73-4009-93a5-e21e44f1bc0a", 00:25:04.163 "is_configured": true, 00:25:04.163 "data_offset": 2048, 00:25:04.163 "data_size": 63488 00:25:04.163 }, 00:25:04.163 { 00:25:04.163 "name": "BaseBdev4", 00:25:04.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.163 "is_configured": false, 00:25:04.163 "data_offset": 0, 00:25:04.163 "data_size": 0 00:25:04.163 } 00:25:04.163 ] 00:25:04.163 }' 00:25:04.163 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:04.163 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:04.730 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:25:04.730 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.730 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:04.730 [2024-11-20 13:45:07.573275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:04.730 [2024-11-20 13:45:07.573633] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:04.730 [2024-11-20 13:45:07.573654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:04.730 BaseBdev4 00:25:04.730 [2024-11-20 13:45:07.574128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:04.730 [2024-11-20 13:45:07.574437] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:04.730 [2024-11-20 13:45:07.574468] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:25:04.730 [2024-11-20 13:45:07.574715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:04.730 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.730 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:25:04.730 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:25:04.730 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:04.730 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:04.730 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:04.731 [ 00:25:04.731 { 00:25:04.731 "name": "BaseBdev4", 00:25:04.731 "aliases": [ 00:25:04.731 "d4c7eda5-b063-4855-96d2-a2464d621d37" 00:25:04.731 ], 00:25:04.731 "product_name": "Malloc disk", 00:25:04.731 "block_size": 512, 00:25:04.731 
"num_blocks": 65536, 00:25:04.731 "uuid": "d4c7eda5-b063-4855-96d2-a2464d621d37", 00:25:04.731 "assigned_rate_limits": { 00:25:04.731 "rw_ios_per_sec": 0, 00:25:04.731 "rw_mbytes_per_sec": 0, 00:25:04.731 "r_mbytes_per_sec": 0, 00:25:04.731 "w_mbytes_per_sec": 0 00:25:04.731 }, 00:25:04.731 "claimed": true, 00:25:04.731 "claim_type": "exclusive_write", 00:25:04.731 "zoned": false, 00:25:04.731 "supported_io_types": { 00:25:04.731 "read": true, 00:25:04.731 "write": true, 00:25:04.731 "unmap": true, 00:25:04.731 "flush": true, 00:25:04.731 "reset": true, 00:25:04.731 "nvme_admin": false, 00:25:04.731 "nvme_io": false, 00:25:04.731 "nvme_io_md": false, 00:25:04.731 "write_zeroes": true, 00:25:04.731 "zcopy": true, 00:25:04.731 "get_zone_info": false, 00:25:04.731 "zone_management": false, 00:25:04.731 "zone_append": false, 00:25:04.731 "compare": false, 00:25:04.731 "compare_and_write": false, 00:25:04.731 "abort": true, 00:25:04.731 "seek_hole": false, 00:25:04.731 "seek_data": false, 00:25:04.731 "copy": true, 00:25:04.731 "nvme_iov_md": false 00:25:04.731 }, 00:25:04.731 "memory_domains": [ 00:25:04.731 { 00:25:04.731 "dma_device_id": "system", 00:25:04.731 "dma_device_type": 1 00:25:04.731 }, 00:25:04.731 { 00:25:04.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.731 "dma_device_type": 2 00:25:04.731 } 00:25:04.731 ], 00:25:04.731 "driver_specific": {} 00:25:04.731 } 00:25:04.731 ] 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:04.731 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.990 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:04.990 "name": "Existed_Raid", 00:25:04.990 "uuid": "d80621ea-6551-446a-b2be-2f15bca28644", 00:25:04.990 "strip_size_kb": 64, 00:25:04.990 "state": "online", 00:25:04.990 "raid_level": "concat", 00:25:04.990 "superblock": true, 00:25:04.990 "num_base_bdevs": 4, 
00:25:04.990 "num_base_bdevs_discovered": 4, 00:25:04.990 "num_base_bdevs_operational": 4, 00:25:04.990 "base_bdevs_list": [ 00:25:04.990 { 00:25:04.990 "name": "BaseBdev1", 00:25:04.990 "uuid": "e0e6b712-bc71-4348-ad06-322b49291ef3", 00:25:04.990 "is_configured": true, 00:25:04.990 "data_offset": 2048, 00:25:04.990 "data_size": 63488 00:25:04.990 }, 00:25:04.990 { 00:25:04.990 "name": "BaseBdev2", 00:25:04.990 "uuid": "40523e20-d4b1-4d40-b18c-22aaa81cd233", 00:25:04.990 "is_configured": true, 00:25:04.990 "data_offset": 2048, 00:25:04.990 "data_size": 63488 00:25:04.990 }, 00:25:04.990 { 00:25:04.990 "name": "BaseBdev3", 00:25:04.990 "uuid": "2f175725-6f73-4009-93a5-e21e44f1bc0a", 00:25:04.990 "is_configured": true, 00:25:04.990 "data_offset": 2048, 00:25:04.990 "data_size": 63488 00:25:04.990 }, 00:25:04.990 { 00:25:04.990 "name": "BaseBdev4", 00:25:04.990 "uuid": "d4c7eda5-b063-4855-96d2-a2464d621d37", 00:25:04.990 "is_configured": true, 00:25:04.990 "data_offset": 2048, 00:25:04.990 "data_size": 63488 00:25:04.990 } 00:25:04.990 ] 00:25:04.990 }' 00:25:04.990 13:45:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:04.990 13:45:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:05.249 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:05.249 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:05.249 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:05.249 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:05.249 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:05.249 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:05.249 
13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:05.249 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:05.249 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.249 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:05.508 [2024-11-20 13:45:08.170138] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:05.508 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.508 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:05.508 "name": "Existed_Raid", 00:25:05.508 "aliases": [ 00:25:05.508 "d80621ea-6551-446a-b2be-2f15bca28644" 00:25:05.508 ], 00:25:05.508 "product_name": "Raid Volume", 00:25:05.508 "block_size": 512, 00:25:05.508 "num_blocks": 253952, 00:25:05.508 "uuid": "d80621ea-6551-446a-b2be-2f15bca28644", 00:25:05.508 "assigned_rate_limits": { 00:25:05.508 "rw_ios_per_sec": 0, 00:25:05.508 "rw_mbytes_per_sec": 0, 00:25:05.508 "r_mbytes_per_sec": 0, 00:25:05.508 "w_mbytes_per_sec": 0 00:25:05.508 }, 00:25:05.508 "claimed": false, 00:25:05.508 "zoned": false, 00:25:05.508 "supported_io_types": { 00:25:05.508 "read": true, 00:25:05.508 "write": true, 00:25:05.508 "unmap": true, 00:25:05.508 "flush": true, 00:25:05.508 "reset": true, 00:25:05.508 "nvme_admin": false, 00:25:05.508 "nvme_io": false, 00:25:05.508 "nvme_io_md": false, 00:25:05.508 "write_zeroes": true, 00:25:05.508 "zcopy": false, 00:25:05.508 "get_zone_info": false, 00:25:05.508 "zone_management": false, 00:25:05.508 "zone_append": false, 00:25:05.508 "compare": false, 00:25:05.508 "compare_and_write": false, 00:25:05.508 "abort": false, 00:25:05.508 "seek_hole": false, 00:25:05.508 "seek_data": false, 00:25:05.508 "copy": false, 00:25:05.508 
"nvme_iov_md": false 00:25:05.508 }, 00:25:05.508 "memory_domains": [ 00:25:05.508 { 00:25:05.508 "dma_device_id": "system", 00:25:05.508 "dma_device_type": 1 00:25:05.508 }, 00:25:05.508 { 00:25:05.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.508 "dma_device_type": 2 00:25:05.508 }, 00:25:05.508 { 00:25:05.508 "dma_device_id": "system", 00:25:05.508 "dma_device_type": 1 00:25:05.508 }, 00:25:05.508 { 00:25:05.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.508 "dma_device_type": 2 00:25:05.508 }, 00:25:05.508 { 00:25:05.508 "dma_device_id": "system", 00:25:05.508 "dma_device_type": 1 00:25:05.508 }, 00:25:05.508 { 00:25:05.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.508 "dma_device_type": 2 00:25:05.508 }, 00:25:05.508 { 00:25:05.508 "dma_device_id": "system", 00:25:05.508 "dma_device_type": 1 00:25:05.508 }, 00:25:05.508 { 00:25:05.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.508 "dma_device_type": 2 00:25:05.508 } 00:25:05.508 ], 00:25:05.508 "driver_specific": { 00:25:05.508 "raid": { 00:25:05.508 "uuid": "d80621ea-6551-446a-b2be-2f15bca28644", 00:25:05.508 "strip_size_kb": 64, 00:25:05.508 "state": "online", 00:25:05.508 "raid_level": "concat", 00:25:05.508 "superblock": true, 00:25:05.508 "num_base_bdevs": 4, 00:25:05.508 "num_base_bdevs_discovered": 4, 00:25:05.508 "num_base_bdevs_operational": 4, 00:25:05.508 "base_bdevs_list": [ 00:25:05.508 { 00:25:05.508 "name": "BaseBdev1", 00:25:05.508 "uuid": "e0e6b712-bc71-4348-ad06-322b49291ef3", 00:25:05.508 "is_configured": true, 00:25:05.508 "data_offset": 2048, 00:25:05.508 "data_size": 63488 00:25:05.508 }, 00:25:05.508 { 00:25:05.508 "name": "BaseBdev2", 00:25:05.508 "uuid": "40523e20-d4b1-4d40-b18c-22aaa81cd233", 00:25:05.508 "is_configured": true, 00:25:05.508 "data_offset": 2048, 00:25:05.508 "data_size": 63488 00:25:05.508 }, 00:25:05.508 { 00:25:05.508 "name": "BaseBdev3", 00:25:05.508 "uuid": "2f175725-6f73-4009-93a5-e21e44f1bc0a", 00:25:05.508 "is_configured": true, 
00:25:05.508 "data_offset": 2048, 00:25:05.508 "data_size": 63488 00:25:05.508 }, 00:25:05.508 { 00:25:05.508 "name": "BaseBdev4", 00:25:05.508 "uuid": "d4c7eda5-b063-4855-96d2-a2464d621d37", 00:25:05.508 "is_configured": true, 00:25:05.508 "data_offset": 2048, 00:25:05.508 "data_size": 63488 00:25:05.508 } 00:25:05.508 ] 00:25:05.508 } 00:25:05.508 } 00:25:05.508 }' 00:25:05.508 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:05.508 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:05.508 BaseBdev2 00:25:05.508 BaseBdev3 00:25:05.508 BaseBdev4' 00:25:05.508 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:05.508 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:05.508 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:05.508 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:05.508 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:05.508 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.508 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:05.508 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.508 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:05.508 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:05.508 13:45:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:05.508 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:05.508 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:05.508 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.508 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:05.508 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:05.768 [2024-11-20 13:45:08.557961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:05.768 [2024-11-20 13:45:08.558012] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:05.768 [2024-11-20 13:45:08.558095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:05.768 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:06.027 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:06.027 "name": "Existed_Raid", 00:25:06.027 "uuid": "d80621ea-6551-446a-b2be-2f15bca28644", 00:25:06.027 "strip_size_kb": 64, 00:25:06.027 "state": "offline", 00:25:06.027 "raid_level": "concat", 00:25:06.027 "superblock": true, 00:25:06.027 "num_base_bdevs": 4, 00:25:06.027 "num_base_bdevs_discovered": 3, 00:25:06.027 "num_base_bdevs_operational": 3, 00:25:06.027 "base_bdevs_list": [ 00:25:06.027 { 00:25:06.027 "name": null, 00:25:06.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.027 "is_configured": false, 00:25:06.027 "data_offset": 0, 00:25:06.027 "data_size": 63488 00:25:06.027 }, 00:25:06.027 { 00:25:06.027 "name": "BaseBdev2", 00:25:06.027 "uuid": "40523e20-d4b1-4d40-b18c-22aaa81cd233", 00:25:06.027 "is_configured": true, 00:25:06.027 "data_offset": 2048, 00:25:06.027 "data_size": 63488 00:25:06.027 }, 00:25:06.027 { 00:25:06.027 "name": "BaseBdev3", 00:25:06.027 "uuid": "2f175725-6f73-4009-93a5-e21e44f1bc0a", 00:25:06.027 "is_configured": true, 00:25:06.027 "data_offset": 2048, 00:25:06.027 "data_size": 63488 00:25:06.027 }, 00:25:06.027 { 00:25:06.027 "name": "BaseBdev4", 00:25:06.027 "uuid": "d4c7eda5-b063-4855-96d2-a2464d621d37", 00:25:06.027 "is_configured": true, 00:25:06.027 "data_offset": 2048, 00:25:06.027 "data_size": 63488 00:25:06.027 } 00:25:06.027 ] 00:25:06.027 }' 00:25:06.027 13:45:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:06.027 13:45:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:06.595 
13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.595 [2024-11-20 13:45:09.264332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.595 [2024-11-20 13:45:09.417053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:06.595 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:25:06.855 13:45:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.855 [2024-11-20 13:45:09.565719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:06.855 [2024-11-20 13:45:09.565785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.855 BaseBdev2 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:06.855 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.856 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:06.856 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.856 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.115 [ 00:25:07.115 { 00:25:07.115 "name": "BaseBdev2", 00:25:07.115 "aliases": [ 00:25:07.115 
"ccaf1c3f-09fb-478a-b3d4-ef67786d43c3" 00:25:07.115 ], 00:25:07.115 "product_name": "Malloc disk", 00:25:07.115 "block_size": 512, 00:25:07.115 "num_blocks": 65536, 00:25:07.115 "uuid": "ccaf1c3f-09fb-478a-b3d4-ef67786d43c3", 00:25:07.115 "assigned_rate_limits": { 00:25:07.115 "rw_ios_per_sec": 0, 00:25:07.115 "rw_mbytes_per_sec": 0, 00:25:07.115 "r_mbytes_per_sec": 0, 00:25:07.115 "w_mbytes_per_sec": 0 00:25:07.115 }, 00:25:07.115 "claimed": false, 00:25:07.115 "zoned": false, 00:25:07.115 "supported_io_types": { 00:25:07.115 "read": true, 00:25:07.115 "write": true, 00:25:07.115 "unmap": true, 00:25:07.115 "flush": true, 00:25:07.115 "reset": true, 00:25:07.115 "nvme_admin": false, 00:25:07.115 "nvme_io": false, 00:25:07.115 "nvme_io_md": false, 00:25:07.115 "write_zeroes": true, 00:25:07.115 "zcopy": true, 00:25:07.115 "get_zone_info": false, 00:25:07.115 "zone_management": false, 00:25:07.115 "zone_append": false, 00:25:07.115 "compare": false, 00:25:07.115 "compare_and_write": false, 00:25:07.115 "abort": true, 00:25:07.115 "seek_hole": false, 00:25:07.115 "seek_data": false, 00:25:07.115 "copy": true, 00:25:07.115 "nvme_iov_md": false 00:25:07.115 }, 00:25:07.115 "memory_domains": [ 00:25:07.115 { 00:25:07.115 "dma_device_id": "system", 00:25:07.115 "dma_device_type": 1 00:25:07.115 }, 00:25:07.115 { 00:25:07.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.115 "dma_device_type": 2 00:25:07.115 } 00:25:07.115 ], 00:25:07.115 "driver_specific": {} 00:25:07.115 } 00:25:07.115 ] 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:07.115 13:45:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.115 BaseBdev3 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.115 [ 00:25:07.115 { 
00:25:07.115 "name": "BaseBdev3", 00:25:07.115 "aliases": [ 00:25:07.115 "8348b0aa-3b66-45f9-aee8-006a2510ce11" 00:25:07.115 ], 00:25:07.115 "product_name": "Malloc disk", 00:25:07.115 "block_size": 512, 00:25:07.115 "num_blocks": 65536, 00:25:07.115 "uuid": "8348b0aa-3b66-45f9-aee8-006a2510ce11", 00:25:07.115 "assigned_rate_limits": { 00:25:07.115 "rw_ios_per_sec": 0, 00:25:07.115 "rw_mbytes_per_sec": 0, 00:25:07.115 "r_mbytes_per_sec": 0, 00:25:07.115 "w_mbytes_per_sec": 0 00:25:07.115 }, 00:25:07.115 "claimed": false, 00:25:07.115 "zoned": false, 00:25:07.115 "supported_io_types": { 00:25:07.115 "read": true, 00:25:07.115 "write": true, 00:25:07.115 "unmap": true, 00:25:07.115 "flush": true, 00:25:07.115 "reset": true, 00:25:07.115 "nvme_admin": false, 00:25:07.115 "nvme_io": false, 00:25:07.115 "nvme_io_md": false, 00:25:07.115 "write_zeroes": true, 00:25:07.115 "zcopy": true, 00:25:07.115 "get_zone_info": false, 00:25:07.115 "zone_management": false, 00:25:07.115 "zone_append": false, 00:25:07.115 "compare": false, 00:25:07.115 "compare_and_write": false, 00:25:07.115 "abort": true, 00:25:07.115 "seek_hole": false, 00:25:07.115 "seek_data": false, 00:25:07.115 "copy": true, 00:25:07.115 "nvme_iov_md": false 00:25:07.115 }, 00:25:07.115 "memory_domains": [ 00:25:07.115 { 00:25:07.115 "dma_device_id": "system", 00:25:07.115 "dma_device_type": 1 00:25:07.115 }, 00:25:07.115 { 00:25:07.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.115 "dma_device_type": 2 00:25:07.115 } 00:25:07.115 ], 00:25:07.115 "driver_specific": {} 00:25:07.115 } 00:25:07.115 ] 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.115 BaseBdev4 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.115 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:25:07.115 [ 00:25:07.115 { 00:25:07.115 "name": "BaseBdev4", 00:25:07.115 "aliases": [ 00:25:07.115 "0256c39e-affe-4088-8b5e-75eb41a796a2" 00:25:07.115 ], 00:25:07.115 "product_name": "Malloc disk", 00:25:07.115 "block_size": 512, 00:25:07.115 "num_blocks": 65536, 00:25:07.115 "uuid": "0256c39e-affe-4088-8b5e-75eb41a796a2", 00:25:07.115 "assigned_rate_limits": { 00:25:07.115 "rw_ios_per_sec": 0, 00:25:07.115 "rw_mbytes_per_sec": 0, 00:25:07.115 "r_mbytes_per_sec": 0, 00:25:07.115 "w_mbytes_per_sec": 0 00:25:07.115 }, 00:25:07.115 "claimed": false, 00:25:07.115 "zoned": false, 00:25:07.115 "supported_io_types": { 00:25:07.115 "read": true, 00:25:07.115 "write": true, 00:25:07.115 "unmap": true, 00:25:07.115 "flush": true, 00:25:07.115 "reset": true, 00:25:07.115 "nvme_admin": false, 00:25:07.116 "nvme_io": false, 00:25:07.116 "nvme_io_md": false, 00:25:07.116 "write_zeroes": true, 00:25:07.116 "zcopy": true, 00:25:07.116 "get_zone_info": false, 00:25:07.116 "zone_management": false, 00:25:07.116 "zone_append": false, 00:25:07.116 "compare": false, 00:25:07.116 "compare_and_write": false, 00:25:07.116 "abort": true, 00:25:07.116 "seek_hole": false, 00:25:07.116 "seek_data": false, 00:25:07.116 "copy": true, 00:25:07.116 "nvme_iov_md": false 00:25:07.116 }, 00:25:07.116 "memory_domains": [ 00:25:07.116 { 00:25:07.116 "dma_device_id": "system", 00:25:07.116 "dma_device_type": 1 00:25:07.116 }, 00:25:07.116 { 00:25:07.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.116 "dma_device_type": 2 00:25:07.116 } 00:25:07.116 ], 00:25:07.116 "driver_specific": {} 00:25:07.116 } 00:25:07.116 ] 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:07.116 13:45:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.116 [2024-11-20 13:45:09.946728] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:07.116 [2024-11-20 13:45:09.946788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:07.116 [2024-11-20 13:45:09.946825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:07.116 [2024-11-20 13:45:09.949313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:07.116 [2024-11-20 13:45:09.949520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.116 13:45:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.116 13:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:07.116 "name": "Existed_Raid", 00:25:07.116 "uuid": "7ff93063-21ec-4936-8211-6de297db51ce", 00:25:07.116 "strip_size_kb": 64, 00:25:07.116 "state": "configuring", 00:25:07.116 "raid_level": "concat", 00:25:07.116 "superblock": true, 00:25:07.116 "num_base_bdevs": 4, 00:25:07.116 "num_base_bdevs_discovered": 3, 00:25:07.116 "num_base_bdevs_operational": 4, 00:25:07.116 "base_bdevs_list": [ 00:25:07.116 { 00:25:07.116 "name": "BaseBdev1", 00:25:07.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.116 "is_configured": false, 00:25:07.116 "data_offset": 0, 00:25:07.116 "data_size": 0 00:25:07.116 }, 00:25:07.116 { 00:25:07.116 "name": "BaseBdev2", 00:25:07.116 "uuid": "ccaf1c3f-09fb-478a-b3d4-ef67786d43c3", 00:25:07.116 "is_configured": true, 00:25:07.116 "data_offset": 2048, 00:25:07.116 "data_size": 63488 
00:25:07.116 }, 00:25:07.116 { 00:25:07.116 "name": "BaseBdev3", 00:25:07.116 "uuid": "8348b0aa-3b66-45f9-aee8-006a2510ce11", 00:25:07.116 "is_configured": true, 00:25:07.116 "data_offset": 2048, 00:25:07.116 "data_size": 63488 00:25:07.116 }, 00:25:07.116 { 00:25:07.116 "name": "BaseBdev4", 00:25:07.116 "uuid": "0256c39e-affe-4088-8b5e-75eb41a796a2", 00:25:07.116 "is_configured": true, 00:25:07.116 "data_offset": 2048, 00:25:07.116 "data_size": 63488 00:25:07.116 } 00:25:07.116 ] 00:25:07.116 }' 00:25:07.116 13:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:07.116 13:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.683 [2024-11-20 13:45:10.510933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:07.683 "name": "Existed_Raid", 00:25:07.683 "uuid": "7ff93063-21ec-4936-8211-6de297db51ce", 00:25:07.683 "strip_size_kb": 64, 00:25:07.683 "state": "configuring", 00:25:07.683 "raid_level": "concat", 00:25:07.683 "superblock": true, 00:25:07.683 "num_base_bdevs": 4, 00:25:07.683 "num_base_bdevs_discovered": 2, 00:25:07.683 "num_base_bdevs_operational": 4, 00:25:07.683 "base_bdevs_list": [ 00:25:07.683 { 00:25:07.683 "name": "BaseBdev1", 00:25:07.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.683 "is_configured": false, 00:25:07.683 "data_offset": 0, 00:25:07.683 "data_size": 0 00:25:07.683 }, 00:25:07.683 { 00:25:07.683 "name": null, 00:25:07.683 "uuid": "ccaf1c3f-09fb-478a-b3d4-ef67786d43c3", 00:25:07.683 "is_configured": false, 00:25:07.683 "data_offset": 0, 00:25:07.683 "data_size": 63488 
00:25:07.683 }, 00:25:07.683 { 00:25:07.683 "name": "BaseBdev3", 00:25:07.683 "uuid": "8348b0aa-3b66-45f9-aee8-006a2510ce11", 00:25:07.683 "is_configured": true, 00:25:07.683 "data_offset": 2048, 00:25:07.683 "data_size": 63488 00:25:07.683 }, 00:25:07.683 { 00:25:07.683 "name": "BaseBdev4", 00:25:07.683 "uuid": "0256c39e-affe-4088-8b5e-75eb41a796a2", 00:25:07.683 "is_configured": true, 00:25:07.683 "data_offset": 2048, 00:25:07.683 "data_size": 63488 00:25:07.683 } 00:25:07.683 ] 00:25:07.683 }' 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:07.683 13:45:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.250 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.250 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:08.250 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.250 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.250 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.250 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:25:08.250 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:08.250 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.250 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.250 [2024-11-20 13:45:11.141191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:08.250 BaseBdev1 00:25:08.251 13:45:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.251 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:25:08.251 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:08.251 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:08.251 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:08.251 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:08.251 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:08.251 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:08.251 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.251 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.251 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.251 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:08.251 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.251 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.510 [ 00:25:08.510 { 00:25:08.510 "name": "BaseBdev1", 00:25:08.510 "aliases": [ 00:25:08.510 "d15152ae-7bad-40d6-90e2-e8dca3ae191b" 00:25:08.510 ], 00:25:08.510 "product_name": "Malloc disk", 00:25:08.510 "block_size": 512, 00:25:08.510 "num_blocks": 65536, 00:25:08.510 "uuid": "d15152ae-7bad-40d6-90e2-e8dca3ae191b", 00:25:08.510 "assigned_rate_limits": { 00:25:08.510 "rw_ios_per_sec": 0, 00:25:08.510 "rw_mbytes_per_sec": 0, 
00:25:08.510 "r_mbytes_per_sec": 0, 00:25:08.510 "w_mbytes_per_sec": 0 00:25:08.510 }, 00:25:08.510 "claimed": true, 00:25:08.510 "claim_type": "exclusive_write", 00:25:08.510 "zoned": false, 00:25:08.510 "supported_io_types": { 00:25:08.510 "read": true, 00:25:08.510 "write": true, 00:25:08.510 "unmap": true, 00:25:08.510 "flush": true, 00:25:08.510 "reset": true, 00:25:08.510 "nvme_admin": false, 00:25:08.510 "nvme_io": false, 00:25:08.510 "nvme_io_md": false, 00:25:08.510 "write_zeroes": true, 00:25:08.510 "zcopy": true, 00:25:08.510 "get_zone_info": false, 00:25:08.510 "zone_management": false, 00:25:08.510 "zone_append": false, 00:25:08.510 "compare": false, 00:25:08.510 "compare_and_write": false, 00:25:08.510 "abort": true, 00:25:08.510 "seek_hole": false, 00:25:08.510 "seek_data": false, 00:25:08.510 "copy": true, 00:25:08.510 "nvme_iov_md": false 00:25:08.510 }, 00:25:08.510 "memory_domains": [ 00:25:08.510 { 00:25:08.510 "dma_device_id": "system", 00:25:08.510 "dma_device_type": 1 00:25:08.510 }, 00:25:08.510 { 00:25:08.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:08.510 "dma_device_type": 2 00:25:08.510 } 00:25:08.510 ], 00:25:08.510 "driver_specific": {} 00:25:08.510 } 00:25:08.510 ] 00:25:08.510 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.510 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:08.510 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:08.510 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:08.510 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:08.510 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:08.510 13:45:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:08.510 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:08.510 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:08.510 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:08.510 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:08.510 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:08.510 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:08.510 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.510 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.510 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:08.510 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.510 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:08.510 "name": "Existed_Raid", 00:25:08.510 "uuid": "7ff93063-21ec-4936-8211-6de297db51ce", 00:25:08.510 "strip_size_kb": 64, 00:25:08.510 "state": "configuring", 00:25:08.510 "raid_level": "concat", 00:25:08.510 "superblock": true, 00:25:08.510 "num_base_bdevs": 4, 00:25:08.510 "num_base_bdevs_discovered": 3, 00:25:08.510 "num_base_bdevs_operational": 4, 00:25:08.510 "base_bdevs_list": [ 00:25:08.510 { 00:25:08.510 "name": "BaseBdev1", 00:25:08.510 "uuid": "d15152ae-7bad-40d6-90e2-e8dca3ae191b", 00:25:08.510 "is_configured": true, 00:25:08.510 "data_offset": 2048, 00:25:08.510 "data_size": 63488 00:25:08.511 }, 00:25:08.511 { 
00:25:08.511 "name": null, 00:25:08.511 "uuid": "ccaf1c3f-09fb-478a-b3d4-ef67786d43c3", 00:25:08.511 "is_configured": false, 00:25:08.511 "data_offset": 0, 00:25:08.511 "data_size": 63488 00:25:08.511 }, 00:25:08.511 { 00:25:08.511 "name": "BaseBdev3", 00:25:08.511 "uuid": "8348b0aa-3b66-45f9-aee8-006a2510ce11", 00:25:08.511 "is_configured": true, 00:25:08.511 "data_offset": 2048, 00:25:08.511 "data_size": 63488 00:25:08.511 }, 00:25:08.511 { 00:25:08.511 "name": "BaseBdev4", 00:25:08.511 "uuid": "0256c39e-affe-4088-8b5e-75eb41a796a2", 00:25:08.511 "is_configured": true, 00:25:08.511 "data_offset": 2048, 00:25:08.511 "data_size": 63488 00:25:08.511 } 00:25:08.511 ] 00:25:08.511 }' 00:25:08.511 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:08.511 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.076 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.076 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.076 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:09.076 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.076 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.076 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:25:09.076 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:25:09.076 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.076 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.076 [2024-11-20 13:45:11.765499] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:09.076 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.076 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:09.076 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:09.076 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:09.076 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:09.076 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:09.076 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:09.076 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:09.076 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:09.077 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:09.077 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:09.077 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:09.077 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.077 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.077 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.077 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.077 13:45:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:09.077 "name": "Existed_Raid", 00:25:09.077 "uuid": "7ff93063-21ec-4936-8211-6de297db51ce", 00:25:09.077 "strip_size_kb": 64, 00:25:09.077 "state": "configuring", 00:25:09.077 "raid_level": "concat", 00:25:09.077 "superblock": true, 00:25:09.077 "num_base_bdevs": 4, 00:25:09.077 "num_base_bdevs_discovered": 2, 00:25:09.077 "num_base_bdevs_operational": 4, 00:25:09.077 "base_bdevs_list": [ 00:25:09.077 { 00:25:09.077 "name": "BaseBdev1", 00:25:09.077 "uuid": "d15152ae-7bad-40d6-90e2-e8dca3ae191b", 00:25:09.077 "is_configured": true, 00:25:09.077 "data_offset": 2048, 00:25:09.077 "data_size": 63488 00:25:09.077 }, 00:25:09.077 { 00:25:09.077 "name": null, 00:25:09.077 "uuid": "ccaf1c3f-09fb-478a-b3d4-ef67786d43c3", 00:25:09.077 "is_configured": false, 00:25:09.077 "data_offset": 0, 00:25:09.077 "data_size": 63488 00:25:09.077 }, 00:25:09.077 { 00:25:09.077 "name": null, 00:25:09.077 "uuid": "8348b0aa-3b66-45f9-aee8-006a2510ce11", 00:25:09.077 "is_configured": false, 00:25:09.077 "data_offset": 0, 00:25:09.077 "data_size": 63488 00:25:09.077 }, 00:25:09.077 { 00:25:09.077 "name": "BaseBdev4", 00:25:09.077 "uuid": "0256c39e-affe-4088-8b5e-75eb41a796a2", 00:25:09.077 "is_configured": true, 00:25:09.077 "data_offset": 2048, 00:25:09.077 "data_size": 63488 00:25:09.077 } 00:25:09.077 ] 00:25:09.077 }' 00:25:09.077 13:45:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:09.077 13:45:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.643 13:45:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.643 [2024-11-20 13:45:12.345604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.643 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:09.643 "name": "Existed_Raid", 00:25:09.643 "uuid": "7ff93063-21ec-4936-8211-6de297db51ce", 00:25:09.643 "strip_size_kb": 64, 00:25:09.643 "state": "configuring", 00:25:09.643 "raid_level": "concat", 00:25:09.643 "superblock": true, 00:25:09.643 "num_base_bdevs": 4, 00:25:09.643 "num_base_bdevs_discovered": 3, 00:25:09.643 "num_base_bdevs_operational": 4, 00:25:09.643 "base_bdevs_list": [ 00:25:09.643 { 00:25:09.643 "name": "BaseBdev1", 00:25:09.643 "uuid": "d15152ae-7bad-40d6-90e2-e8dca3ae191b", 00:25:09.643 "is_configured": true, 00:25:09.643 "data_offset": 2048, 00:25:09.643 "data_size": 63488 00:25:09.643 }, 00:25:09.643 { 00:25:09.643 "name": null, 00:25:09.643 "uuid": "ccaf1c3f-09fb-478a-b3d4-ef67786d43c3", 00:25:09.643 "is_configured": false, 00:25:09.643 "data_offset": 0, 00:25:09.643 "data_size": 63488 00:25:09.643 }, 00:25:09.643 { 00:25:09.643 "name": "BaseBdev3", 00:25:09.643 "uuid": "8348b0aa-3b66-45f9-aee8-006a2510ce11", 00:25:09.643 "is_configured": true, 00:25:09.643 "data_offset": 2048, 00:25:09.643 "data_size": 63488 00:25:09.643 }, 00:25:09.643 { 00:25:09.643 "name": "BaseBdev4", 00:25:09.644 "uuid": 
"0256c39e-affe-4088-8b5e-75eb41a796a2", 00:25:09.644 "is_configured": true, 00:25:09.644 "data_offset": 2048, 00:25:09.644 "data_size": 63488 00:25:09.644 } 00:25:09.644 ] 00:25:09.644 }' 00:25:09.644 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:09.644 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.211 [2024-11-20 13:45:12.901775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.211 13:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:10.211 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.211 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:10.211 "name": "Existed_Raid", 00:25:10.211 "uuid": "7ff93063-21ec-4936-8211-6de297db51ce", 00:25:10.211 "strip_size_kb": 64, 00:25:10.211 "state": "configuring", 00:25:10.211 "raid_level": "concat", 00:25:10.211 "superblock": true, 00:25:10.211 "num_base_bdevs": 4, 00:25:10.211 "num_base_bdevs_discovered": 2, 00:25:10.211 "num_base_bdevs_operational": 4, 00:25:10.211 "base_bdevs_list": [ 00:25:10.211 { 00:25:10.211 "name": null, 00:25:10.211 
"uuid": "d15152ae-7bad-40d6-90e2-e8dca3ae191b", 00:25:10.211 "is_configured": false, 00:25:10.211 "data_offset": 0, 00:25:10.211 "data_size": 63488 00:25:10.211 }, 00:25:10.211 { 00:25:10.211 "name": null, 00:25:10.211 "uuid": "ccaf1c3f-09fb-478a-b3d4-ef67786d43c3", 00:25:10.211 "is_configured": false, 00:25:10.211 "data_offset": 0, 00:25:10.211 "data_size": 63488 00:25:10.211 }, 00:25:10.211 { 00:25:10.211 "name": "BaseBdev3", 00:25:10.211 "uuid": "8348b0aa-3b66-45f9-aee8-006a2510ce11", 00:25:10.211 "is_configured": true, 00:25:10.211 "data_offset": 2048, 00:25:10.211 "data_size": 63488 00:25:10.211 }, 00:25:10.211 { 00:25:10.211 "name": "BaseBdev4", 00:25:10.211 "uuid": "0256c39e-affe-4088-8b5e-75eb41a796a2", 00:25:10.211 "is_configured": true, 00:25:10.211 "data_offset": 2048, 00:25:10.211 "data_size": 63488 00:25:10.211 } 00:25:10.211 ] 00:25:10.211 }' 00:25:10.211 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:10.211 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.808 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:10.808 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.808 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.808 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.808 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.808 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:25:10.808 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:10.808 13:45:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.808 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.808 [2024-11-20 13:45:13.552128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:10.808 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.808 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:25:10.808 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:10.808 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:10.808 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:10.808 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:10.808 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:10.809 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:10.809 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:10.809 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:10.809 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:10.809 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:10.809 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:10.809 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.809 13:45:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.809 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.809 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:10.809 "name": "Existed_Raid", 00:25:10.809 "uuid": "7ff93063-21ec-4936-8211-6de297db51ce", 00:25:10.809 "strip_size_kb": 64, 00:25:10.809 "state": "configuring", 00:25:10.809 "raid_level": "concat", 00:25:10.809 "superblock": true, 00:25:10.809 "num_base_bdevs": 4, 00:25:10.809 "num_base_bdevs_discovered": 3, 00:25:10.809 "num_base_bdevs_operational": 4, 00:25:10.809 "base_bdevs_list": [ 00:25:10.809 { 00:25:10.809 "name": null, 00:25:10.809 "uuid": "d15152ae-7bad-40d6-90e2-e8dca3ae191b", 00:25:10.809 "is_configured": false, 00:25:10.809 "data_offset": 0, 00:25:10.809 "data_size": 63488 00:25:10.809 }, 00:25:10.809 { 00:25:10.809 "name": "BaseBdev2", 00:25:10.809 "uuid": "ccaf1c3f-09fb-478a-b3d4-ef67786d43c3", 00:25:10.809 "is_configured": true, 00:25:10.809 "data_offset": 2048, 00:25:10.809 "data_size": 63488 00:25:10.809 }, 00:25:10.809 { 00:25:10.809 "name": "BaseBdev3", 00:25:10.809 "uuid": "8348b0aa-3b66-45f9-aee8-006a2510ce11", 00:25:10.809 "is_configured": true, 00:25:10.809 "data_offset": 2048, 00:25:10.809 "data_size": 63488 00:25:10.809 }, 00:25:10.809 { 00:25:10.809 "name": "BaseBdev4", 00:25:10.809 "uuid": "0256c39e-affe-4088-8b5e-75eb41a796a2", 00:25:10.809 "is_configured": true, 00:25:10.809 "data_offset": 2048, 00:25:10.809 "data_size": 63488 00:25:10.809 } 00:25:10.809 ] 00:25:10.809 }' 00:25:10.809 13:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:10.809 13:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.375 13:45:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d15152ae-7bad-40d6-90e2-e8dca3ae191b 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.375 [2024-11-20 13:45:14.178187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:11.375 [2024-11-20 13:45:14.178497] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:11.375 [2024-11-20 13:45:14.178516] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:11.375 NewBaseBdev 00:25:11.375 [2024-11-20 13:45:14.178844] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:25:11.375 [2024-11-20 13:45:14.179039] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:11.375 [2024-11-20 13:45:14.179062] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:25:11.375 [2024-11-20 13:45:14.179233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:11.375 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.375 
13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.375 [ 00:25:11.375 { 00:25:11.375 "name": "NewBaseBdev", 00:25:11.375 "aliases": [ 00:25:11.375 "d15152ae-7bad-40d6-90e2-e8dca3ae191b" 00:25:11.375 ], 00:25:11.375 "product_name": "Malloc disk", 00:25:11.375 "block_size": 512, 00:25:11.375 "num_blocks": 65536, 00:25:11.375 "uuid": "d15152ae-7bad-40d6-90e2-e8dca3ae191b", 00:25:11.375 "assigned_rate_limits": { 00:25:11.375 "rw_ios_per_sec": 0, 00:25:11.375 "rw_mbytes_per_sec": 0, 00:25:11.375 "r_mbytes_per_sec": 0, 00:25:11.375 "w_mbytes_per_sec": 0 00:25:11.375 }, 00:25:11.375 "claimed": true, 00:25:11.375 "claim_type": "exclusive_write", 00:25:11.375 "zoned": false, 00:25:11.375 "supported_io_types": { 00:25:11.375 "read": true, 00:25:11.375 "write": true, 00:25:11.375 "unmap": true, 00:25:11.375 "flush": true, 00:25:11.375 "reset": true, 00:25:11.375 "nvme_admin": false, 00:25:11.376 "nvme_io": false, 00:25:11.376 "nvme_io_md": false, 00:25:11.376 "write_zeroes": true, 00:25:11.376 "zcopy": true, 00:25:11.376 "get_zone_info": false, 00:25:11.376 "zone_management": false, 00:25:11.376 "zone_append": false, 00:25:11.376 "compare": false, 00:25:11.376 "compare_and_write": false, 00:25:11.376 "abort": true, 00:25:11.376 "seek_hole": false, 00:25:11.376 "seek_data": false, 00:25:11.376 "copy": true, 00:25:11.376 "nvme_iov_md": false 00:25:11.376 }, 00:25:11.376 "memory_domains": [ 00:25:11.376 { 00:25:11.376 "dma_device_id": "system", 00:25:11.376 "dma_device_type": 1 00:25:11.376 }, 00:25:11.376 { 00:25:11.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.376 "dma_device_type": 2 00:25:11.376 } 00:25:11.376 ], 00:25:11.376 "driver_specific": {} 00:25:11.376 } 00:25:11.376 ] 00:25:11.376 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.376 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:11.376 13:45:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:25:11.376 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:11.376 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:11.376 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:11.376 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:11.376 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:11.376 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:11.376 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:11.376 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:11.376 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:11.376 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:11.376 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.376 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.376 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:11.376 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.376 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:11.376 "name": "Existed_Raid", 00:25:11.376 "uuid": "7ff93063-21ec-4936-8211-6de297db51ce", 00:25:11.376 "strip_size_kb": 64, 00:25:11.376 
"state": "online", 00:25:11.376 "raid_level": "concat", 00:25:11.376 "superblock": true, 00:25:11.376 "num_base_bdevs": 4, 00:25:11.376 "num_base_bdevs_discovered": 4, 00:25:11.376 "num_base_bdevs_operational": 4, 00:25:11.376 "base_bdevs_list": [ 00:25:11.376 { 00:25:11.376 "name": "NewBaseBdev", 00:25:11.376 "uuid": "d15152ae-7bad-40d6-90e2-e8dca3ae191b", 00:25:11.376 "is_configured": true, 00:25:11.376 "data_offset": 2048, 00:25:11.376 "data_size": 63488 00:25:11.376 }, 00:25:11.376 { 00:25:11.376 "name": "BaseBdev2", 00:25:11.376 "uuid": "ccaf1c3f-09fb-478a-b3d4-ef67786d43c3", 00:25:11.376 "is_configured": true, 00:25:11.376 "data_offset": 2048, 00:25:11.376 "data_size": 63488 00:25:11.376 }, 00:25:11.376 { 00:25:11.376 "name": "BaseBdev3", 00:25:11.376 "uuid": "8348b0aa-3b66-45f9-aee8-006a2510ce11", 00:25:11.376 "is_configured": true, 00:25:11.376 "data_offset": 2048, 00:25:11.376 "data_size": 63488 00:25:11.376 }, 00:25:11.376 { 00:25:11.376 "name": "BaseBdev4", 00:25:11.376 "uuid": "0256c39e-affe-4088-8b5e-75eb41a796a2", 00:25:11.376 "is_configured": true, 00:25:11.376 "data_offset": 2048, 00:25:11.376 "data_size": 63488 00:25:11.376 } 00:25:11.376 ] 00:25:11.376 }' 00:25:11.376 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:11.376 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.947 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:25:11.947 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:11.947 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:11.947 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:11.947 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:11.947 
13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:11.947 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:11.947 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:11.947 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.948 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:11.948 [2024-11-20 13:45:14.738835] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:11.948 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.948 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:11.948 "name": "Existed_Raid", 00:25:11.948 "aliases": [ 00:25:11.948 "7ff93063-21ec-4936-8211-6de297db51ce" 00:25:11.948 ], 00:25:11.948 "product_name": "Raid Volume", 00:25:11.948 "block_size": 512, 00:25:11.948 "num_blocks": 253952, 00:25:11.948 "uuid": "7ff93063-21ec-4936-8211-6de297db51ce", 00:25:11.948 "assigned_rate_limits": { 00:25:11.948 "rw_ios_per_sec": 0, 00:25:11.948 "rw_mbytes_per_sec": 0, 00:25:11.948 "r_mbytes_per_sec": 0, 00:25:11.948 "w_mbytes_per_sec": 0 00:25:11.948 }, 00:25:11.948 "claimed": false, 00:25:11.948 "zoned": false, 00:25:11.948 "supported_io_types": { 00:25:11.948 "read": true, 00:25:11.948 "write": true, 00:25:11.948 "unmap": true, 00:25:11.948 "flush": true, 00:25:11.948 "reset": true, 00:25:11.948 "nvme_admin": false, 00:25:11.948 "nvme_io": false, 00:25:11.948 "nvme_io_md": false, 00:25:11.948 "write_zeroes": true, 00:25:11.948 "zcopy": false, 00:25:11.948 "get_zone_info": false, 00:25:11.948 "zone_management": false, 00:25:11.948 "zone_append": false, 00:25:11.948 "compare": false, 00:25:11.948 "compare_and_write": false, 00:25:11.948 "abort": 
false, 00:25:11.948 "seek_hole": false, 00:25:11.948 "seek_data": false, 00:25:11.948 "copy": false, 00:25:11.948 "nvme_iov_md": false 00:25:11.948 }, 00:25:11.948 "memory_domains": [ 00:25:11.948 { 00:25:11.948 "dma_device_id": "system", 00:25:11.948 "dma_device_type": 1 00:25:11.948 }, 00:25:11.948 { 00:25:11.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.948 "dma_device_type": 2 00:25:11.948 }, 00:25:11.948 { 00:25:11.948 "dma_device_id": "system", 00:25:11.948 "dma_device_type": 1 00:25:11.948 }, 00:25:11.948 { 00:25:11.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.948 "dma_device_type": 2 00:25:11.948 }, 00:25:11.948 { 00:25:11.948 "dma_device_id": "system", 00:25:11.948 "dma_device_type": 1 00:25:11.948 }, 00:25:11.948 { 00:25:11.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.948 "dma_device_type": 2 00:25:11.948 }, 00:25:11.948 { 00:25:11.948 "dma_device_id": "system", 00:25:11.948 "dma_device_type": 1 00:25:11.948 }, 00:25:11.948 { 00:25:11.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.948 "dma_device_type": 2 00:25:11.948 } 00:25:11.948 ], 00:25:11.948 "driver_specific": { 00:25:11.948 "raid": { 00:25:11.948 "uuid": "7ff93063-21ec-4936-8211-6de297db51ce", 00:25:11.948 "strip_size_kb": 64, 00:25:11.948 "state": "online", 00:25:11.948 "raid_level": "concat", 00:25:11.948 "superblock": true, 00:25:11.948 "num_base_bdevs": 4, 00:25:11.948 "num_base_bdevs_discovered": 4, 00:25:11.948 "num_base_bdevs_operational": 4, 00:25:11.948 "base_bdevs_list": [ 00:25:11.948 { 00:25:11.948 "name": "NewBaseBdev", 00:25:11.948 "uuid": "d15152ae-7bad-40d6-90e2-e8dca3ae191b", 00:25:11.948 "is_configured": true, 00:25:11.948 "data_offset": 2048, 00:25:11.948 "data_size": 63488 00:25:11.948 }, 00:25:11.948 { 00:25:11.948 "name": "BaseBdev2", 00:25:11.948 "uuid": "ccaf1c3f-09fb-478a-b3d4-ef67786d43c3", 00:25:11.948 "is_configured": true, 00:25:11.948 "data_offset": 2048, 00:25:11.948 "data_size": 63488 00:25:11.948 }, 00:25:11.948 { 00:25:11.948 
"name": "BaseBdev3", 00:25:11.948 "uuid": "8348b0aa-3b66-45f9-aee8-006a2510ce11", 00:25:11.948 "is_configured": true, 00:25:11.948 "data_offset": 2048, 00:25:11.948 "data_size": 63488 00:25:11.948 }, 00:25:11.948 { 00:25:11.948 "name": "BaseBdev4", 00:25:11.948 "uuid": "0256c39e-affe-4088-8b5e-75eb41a796a2", 00:25:11.948 "is_configured": true, 00:25:11.948 "data_offset": 2048, 00:25:11.948 "data_size": 63488 00:25:11.948 } 00:25:11.948 ] 00:25:11.948 } 00:25:11.948 } 00:25:11.948 }' 00:25:11.948 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:11.948 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:11.948 BaseBdev2 00:25:11.948 BaseBdev3 00:25:11.948 BaseBdev4' 00:25:11.948 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:12.206 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:12.206 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:12.206 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:12.206 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:12.206 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.206 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.206 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.206 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:12.206 13:45:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:12.206 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:12.206 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:12.206 13:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:12.206 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.206 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.206 13:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.206 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:12.206 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:12.206 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:12.206 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:12.206 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:12.206 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.206 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.206 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.206 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:12.206 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:25:12.206 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:12.206 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:25:12.206 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.206 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.206 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:12.206 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.466 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:12.466 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:12.466 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:12.466 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.466 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:12.466 [2024-11-20 13:45:15.126527] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:12.466 [2024-11-20 13:45:15.126567] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:12.466 [2024-11-20 13:45:15.126668] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:12.466 [2024-11-20 13:45:15.126762] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:12.466 [2024-11-20 13:45:15.126779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:25:12.466 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.466 13:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72269 00:25:12.466 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72269 ']' 00:25:12.466 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72269 00:25:12.466 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:25:12.466 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.466 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72269 00:25:12.466 killing process with pid 72269 00:25:12.466 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:12.466 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:12.466 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72269' 00:25:12.466 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72269 00:25:12.466 [2024-11-20 13:45:15.164682] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:12.466 13:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72269 00:25:12.725 [2024-11-20 13:45:15.520848] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:13.661 ************************************ 00:25:13.661 END TEST raid_state_function_test_sb 00:25:13.661 ************************************ 00:25:13.661 13:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:25:13.661 00:25:13.661 real 0m13.225s 00:25:13.661 user 0m21.910s 00:25:13.661 sys 
0m1.894s 00:25:13.661 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:13.661 13:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:13.920 13:45:16 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:25:13.920 13:45:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:13.920 13:45:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:13.920 13:45:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:13.920 ************************************ 00:25:13.920 START TEST raid_superblock_test 00:25:13.920 ************************************ 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72956 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72956 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72956 ']' 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.920 13:45:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.920 [2024-11-20 13:45:16.736059] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:25:13.920 [2024-11-20 13:45:16.736528] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72956 ] 00:25:14.178 [2024-11-20 13:45:16.928372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.178 [2024-11-20 13:45:17.077785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.436 [2024-11-20 13:45:17.280664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:14.436 [2024-11-20 13:45:17.280974] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:25:15.002 
13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.002 malloc1 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.002 [2024-11-20 13:45:17.870888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:15.002 [2024-11-20 13:45:17.870975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:15.002 [2024-11-20 13:45:17.871008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:15.002 [2024-11-20 13:45:17.871024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:15.002 [2024-11-20 13:45:17.874046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:15.002 [2024-11-20 13:45:17.874092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:15.002 pt1 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.002 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.262 malloc2 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.262 [2024-11-20 13:45:17.919454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:15.262 [2024-11-20 13:45:17.919536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:15.262 [2024-11-20 13:45:17.919573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:15.262 [2024-11-20 13:45:17.919588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:15.262 [2024-11-20 13:45:17.922354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:15.262 [2024-11-20 13:45:17.922400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:15.262 
pt2 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.262 malloc3 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.262 [2024-11-20 13:45:17.978250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:15.262 [2024-11-20 13:45:17.978320] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:15.262 [2024-11-20 13:45:17.978353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:15.262 [2024-11-20 13:45:17.978368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:15.262 [2024-11-20 13:45:17.981144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:15.262 [2024-11-20 13:45:17.981192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:15.262 pt3 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.262 13:45:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.262 malloc4 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.262 [2024-11-20 13:45:18.027661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:15.262 [2024-11-20 13:45:18.027739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:15.262 [2024-11-20 13:45:18.027772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:25:15.262 [2024-11-20 13:45:18.027787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:15.262 [2024-11-20 13:45:18.030591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:15.262 [2024-11-20 13:45:18.030652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:15.262 pt4 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.262 [2024-11-20 13:45:18.035700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:15.262 [2024-11-20 
13:45:18.038224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:15.262 [2024-11-20 13:45:18.038347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:15.262 [2024-11-20 13:45:18.038422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:15.262 [2024-11-20 13:45:18.038671] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:15.262 [2024-11-20 13:45:18.038690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:15.262 [2024-11-20 13:45:18.039209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:15.262 [2024-11-20 13:45:18.039635] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:15.262 [2024-11-20 13:45:18.039768] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:15.262 [2024-11-20 13:45:18.040155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.262 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.263 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.263 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.263 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:15.263 "name": "raid_bdev1", 00:25:15.263 "uuid": "35ec1612-8453-4659-8cab-92d87502c5b0", 00:25:15.263 "strip_size_kb": 64, 00:25:15.263 "state": "online", 00:25:15.263 "raid_level": "concat", 00:25:15.263 "superblock": true, 00:25:15.263 "num_base_bdevs": 4, 00:25:15.263 "num_base_bdevs_discovered": 4, 00:25:15.263 "num_base_bdevs_operational": 4, 00:25:15.263 "base_bdevs_list": [ 00:25:15.263 { 00:25:15.263 "name": "pt1", 00:25:15.263 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:15.263 "is_configured": true, 00:25:15.263 "data_offset": 2048, 00:25:15.263 "data_size": 63488 00:25:15.263 }, 00:25:15.263 { 00:25:15.263 "name": "pt2", 00:25:15.263 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:15.263 "is_configured": true, 00:25:15.263 "data_offset": 2048, 00:25:15.263 "data_size": 63488 00:25:15.263 }, 00:25:15.263 { 00:25:15.263 "name": "pt3", 00:25:15.263 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:15.263 "is_configured": true, 00:25:15.263 "data_offset": 2048, 00:25:15.263 
"data_size": 63488 00:25:15.263 }, 00:25:15.263 { 00:25:15.263 "name": "pt4", 00:25:15.263 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:15.263 "is_configured": true, 00:25:15.263 "data_offset": 2048, 00:25:15.263 "data_size": 63488 00:25:15.263 } 00:25:15.263 ] 00:25:15.263 }' 00:25:15.263 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:15.263 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.830 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:15.830 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:15.830 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:15.830 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:15.830 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:15.830 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:15.830 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:15.830 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:15.830 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.830 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.830 [2024-11-20 13:45:18.556663] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:15.830 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.830 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:15.830 "name": "raid_bdev1", 00:25:15.830 "aliases": [ 00:25:15.830 "35ec1612-8453-4659-8cab-92d87502c5b0" 
00:25:15.830 ], 00:25:15.830 "product_name": "Raid Volume", 00:25:15.830 "block_size": 512, 00:25:15.830 "num_blocks": 253952, 00:25:15.830 "uuid": "35ec1612-8453-4659-8cab-92d87502c5b0", 00:25:15.830 "assigned_rate_limits": { 00:25:15.830 "rw_ios_per_sec": 0, 00:25:15.830 "rw_mbytes_per_sec": 0, 00:25:15.830 "r_mbytes_per_sec": 0, 00:25:15.830 "w_mbytes_per_sec": 0 00:25:15.830 }, 00:25:15.830 "claimed": false, 00:25:15.830 "zoned": false, 00:25:15.830 "supported_io_types": { 00:25:15.830 "read": true, 00:25:15.830 "write": true, 00:25:15.830 "unmap": true, 00:25:15.830 "flush": true, 00:25:15.830 "reset": true, 00:25:15.830 "nvme_admin": false, 00:25:15.830 "nvme_io": false, 00:25:15.830 "nvme_io_md": false, 00:25:15.830 "write_zeroes": true, 00:25:15.830 "zcopy": false, 00:25:15.830 "get_zone_info": false, 00:25:15.830 "zone_management": false, 00:25:15.830 "zone_append": false, 00:25:15.830 "compare": false, 00:25:15.830 "compare_and_write": false, 00:25:15.830 "abort": false, 00:25:15.830 "seek_hole": false, 00:25:15.830 "seek_data": false, 00:25:15.830 "copy": false, 00:25:15.830 "nvme_iov_md": false 00:25:15.830 }, 00:25:15.830 "memory_domains": [ 00:25:15.830 { 00:25:15.830 "dma_device_id": "system", 00:25:15.830 "dma_device_type": 1 00:25:15.830 }, 00:25:15.830 { 00:25:15.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:15.830 "dma_device_type": 2 00:25:15.830 }, 00:25:15.830 { 00:25:15.830 "dma_device_id": "system", 00:25:15.830 "dma_device_type": 1 00:25:15.830 }, 00:25:15.830 { 00:25:15.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:15.830 "dma_device_type": 2 00:25:15.830 }, 00:25:15.830 { 00:25:15.830 "dma_device_id": "system", 00:25:15.830 "dma_device_type": 1 00:25:15.830 }, 00:25:15.830 { 00:25:15.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:15.830 "dma_device_type": 2 00:25:15.830 }, 00:25:15.830 { 00:25:15.830 "dma_device_id": "system", 00:25:15.830 "dma_device_type": 1 00:25:15.830 }, 00:25:15.830 { 00:25:15.830 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:25:15.831 "dma_device_type": 2 00:25:15.831 } 00:25:15.831 ], 00:25:15.831 "driver_specific": { 00:25:15.831 "raid": { 00:25:15.831 "uuid": "35ec1612-8453-4659-8cab-92d87502c5b0", 00:25:15.831 "strip_size_kb": 64, 00:25:15.831 "state": "online", 00:25:15.831 "raid_level": "concat", 00:25:15.831 "superblock": true, 00:25:15.831 "num_base_bdevs": 4, 00:25:15.831 "num_base_bdevs_discovered": 4, 00:25:15.831 "num_base_bdevs_operational": 4, 00:25:15.831 "base_bdevs_list": [ 00:25:15.831 { 00:25:15.831 "name": "pt1", 00:25:15.831 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:15.831 "is_configured": true, 00:25:15.831 "data_offset": 2048, 00:25:15.831 "data_size": 63488 00:25:15.831 }, 00:25:15.831 { 00:25:15.831 "name": "pt2", 00:25:15.831 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:15.831 "is_configured": true, 00:25:15.831 "data_offset": 2048, 00:25:15.831 "data_size": 63488 00:25:15.831 }, 00:25:15.831 { 00:25:15.831 "name": "pt3", 00:25:15.831 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:15.831 "is_configured": true, 00:25:15.831 "data_offset": 2048, 00:25:15.831 "data_size": 63488 00:25:15.831 }, 00:25:15.831 { 00:25:15.831 "name": "pt4", 00:25:15.831 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:15.831 "is_configured": true, 00:25:15.831 "data_offset": 2048, 00:25:15.831 "data_size": 63488 00:25:15.831 } 00:25:15.831 ] 00:25:15.831 } 00:25:15.831 } 00:25:15.831 }' 00:25:15.831 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:15.831 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:15.831 pt2 00:25:15.831 pt3 00:25:15.831 pt4' 00:25:15.831 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:15.831 13:45:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:15.831 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:15.831 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:15.831 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:15.831 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.831 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.831 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:16.090 13:45:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:25:16.090 [2024-11-20 13:45:18.932970] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=35ec1612-8453-4659-8cab-92d87502c5b0 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 35ec1612-8453-4659-8cab-92d87502c5b0 ']' 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.090 [2024-11-20 13:45:18.984592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:16.090 [2024-11-20 13:45:18.984747] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:16.090 [2024-11-20 13:45:18.984989] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:16.090 [2024-11-20 13:45:18.985192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:16.090 [2024-11-20 13:45:18.985353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.090 13:45:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:16.350 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.351 [2024-11-20 13:45:19.128658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:16.351 [2024-11-20 13:45:19.131314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:16.351 [2024-11-20 13:45:19.131505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:16.351 [2024-11-20 13:45:19.131720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:25:16.351 [2024-11-20 13:45:19.131940] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:16.351 [2024-11-20 13:45:19.132167] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:16.351 [2024-11-20 13:45:19.132386] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:25:16.351 [2024-11-20 13:45:19.132560] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:25:16.351 [2024-11-20 13:45:19.132713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:16.351 [2024-11-20 13:45:19.132764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:25:16.351 request: 00:25:16.351 { 00:25:16.351 "name": "raid_bdev1", 00:25:16.351 "raid_level": "concat", 00:25:16.351 "base_bdevs": [ 00:25:16.351 "malloc1", 00:25:16.351 "malloc2", 00:25:16.351 "malloc3", 00:25:16.351 "malloc4" 00:25:16.351 ], 00:25:16.351 "strip_size_kb": 64, 00:25:16.351 "superblock": false, 00:25:16.351 "method": "bdev_raid_create", 00:25:16.351 "req_id": 1 00:25:16.351 } 00:25:16.351 Got JSON-RPC error response 00:25:16.351 response: 00:25:16.351 { 00:25:16.351 "code": -17, 00:25:16.351 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:16.351 } 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.351 [2024-11-20 13:45:19.201240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:16.351 [2024-11-20 13:45:19.201326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:16.351 [2024-11-20 13:45:19.201358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:16.351 [2024-11-20 13:45:19.201376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:16.351 [2024-11-20 13:45:19.204323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:16.351 [2024-11-20 13:45:19.204376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:16.351 [2024-11-20 13:45:19.204486] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:16.351 [2024-11-20 13:45:19.204564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:16.351 pt1 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:16.351 "name": "raid_bdev1", 00:25:16.351 "uuid": "35ec1612-8453-4659-8cab-92d87502c5b0", 00:25:16.351 "strip_size_kb": 64, 00:25:16.351 "state": "configuring", 00:25:16.351 "raid_level": "concat", 00:25:16.351 "superblock": true, 00:25:16.351 "num_base_bdevs": 4, 00:25:16.351 "num_base_bdevs_discovered": 1, 00:25:16.351 "num_base_bdevs_operational": 4, 00:25:16.351 "base_bdevs_list": [ 00:25:16.351 { 00:25:16.351 "name": "pt1", 00:25:16.351 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:16.351 "is_configured": true, 00:25:16.351 "data_offset": 2048, 00:25:16.351 "data_size": 63488 00:25:16.351 }, 00:25:16.351 { 00:25:16.351 "name": null, 00:25:16.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:16.351 "is_configured": false, 00:25:16.351 "data_offset": 2048, 00:25:16.351 "data_size": 63488 00:25:16.351 }, 00:25:16.351 { 00:25:16.351 "name": null, 00:25:16.351 
"uuid": "00000000-0000-0000-0000-000000000003", 00:25:16.351 "is_configured": false, 00:25:16.351 "data_offset": 2048, 00:25:16.351 "data_size": 63488 00:25:16.351 }, 00:25:16.351 { 00:25:16.351 "name": null, 00:25:16.351 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:16.351 "is_configured": false, 00:25:16.351 "data_offset": 2048, 00:25:16.351 "data_size": 63488 00:25:16.351 } 00:25:16.351 ] 00:25:16.351 }' 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:16.351 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.918 [2024-11-20 13:45:19.753430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:16.918 [2024-11-20 13:45:19.753556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:16.918 [2024-11-20 13:45:19.753587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:25:16.918 [2024-11-20 13:45:19.753604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:16.918 [2024-11-20 13:45:19.754262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:16.918 [2024-11-20 13:45:19.754312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:16.918 [2024-11-20 13:45:19.754418] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:16.918 [2024-11-20 13:45:19.754456] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:16.918 pt2 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.918 [2024-11-20 13:45:19.761409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:16.918 13:45:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:16.918 "name": "raid_bdev1", 00:25:16.918 "uuid": "35ec1612-8453-4659-8cab-92d87502c5b0", 00:25:16.918 "strip_size_kb": 64, 00:25:16.918 "state": "configuring", 00:25:16.918 "raid_level": "concat", 00:25:16.918 "superblock": true, 00:25:16.918 "num_base_bdevs": 4, 00:25:16.918 "num_base_bdevs_discovered": 1, 00:25:16.918 "num_base_bdevs_operational": 4, 00:25:16.918 "base_bdevs_list": [ 00:25:16.918 { 00:25:16.918 "name": "pt1", 00:25:16.918 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:16.918 "is_configured": true, 00:25:16.918 "data_offset": 2048, 00:25:16.918 "data_size": 63488 00:25:16.918 }, 00:25:16.918 { 00:25:16.918 "name": null, 00:25:16.918 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:16.918 "is_configured": false, 00:25:16.918 "data_offset": 0, 00:25:16.918 "data_size": 63488 00:25:16.918 }, 00:25:16.918 { 00:25:16.918 "name": null, 00:25:16.918 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:16.918 "is_configured": false, 00:25:16.918 "data_offset": 2048, 00:25:16.918 "data_size": 63488 00:25:16.918 }, 00:25:16.918 { 00:25:16.918 "name": null, 00:25:16.918 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:16.918 "is_configured": false, 00:25:16.918 "data_offset": 2048, 00:25:16.918 "data_size": 63488 00:25:16.918 } 00:25:16.918 ] 00:25:16.918 }' 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:16.918 13:45:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.486 [2024-11-20 13:45:20.305681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:17.486 [2024-11-20 13:45:20.305765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.486 [2024-11-20 13:45:20.305798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:17.486 [2024-11-20 13:45:20.305812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.486 [2024-11-20 13:45:20.306397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.486 [2024-11-20 13:45:20.306441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:17.486 [2024-11-20 13:45:20.306552] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:17.486 [2024-11-20 13:45:20.306585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:17.486 pt2 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.486 [2024-11-20 13:45:20.313632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:17.486 [2024-11-20 13:45:20.313688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.486 [2024-11-20 13:45:20.313721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:17.486 [2024-11-20 13:45:20.313735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.486 [2024-11-20 13:45:20.314220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.486 [2024-11-20 13:45:20.314255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:17.486 [2024-11-20 13:45:20.314336] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:17.486 [2024-11-20 13:45:20.314371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:17.486 pt3 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.486 [2024-11-20 13:45:20.325610] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:17.486 [2024-11-20 13:45:20.325665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.486 [2024-11-20 13:45:20.325691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:17.486 [2024-11-20 13:45:20.325704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.486 [2024-11-20 13:45:20.326220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.486 [2024-11-20 13:45:20.326251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:17.486 [2024-11-20 13:45:20.326335] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:17.486 [2024-11-20 13:45:20.326374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:17.486 [2024-11-20 13:45:20.326543] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:17.486 [2024-11-20 13:45:20.326559] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:17.486 [2024-11-20 13:45:20.326869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:17.486 [2024-11-20 13:45:20.327088] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:17.486 [2024-11-20 13:45:20.327112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:25:17.486 [2024-11-20 13:45:20.327283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:17.486 pt4 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:17.486 "name": "raid_bdev1", 00:25:17.486 "uuid": "35ec1612-8453-4659-8cab-92d87502c5b0", 00:25:17.486 "strip_size_kb": 64, 00:25:17.486 "state": "online", 00:25:17.486 "raid_level": "concat", 00:25:17.486 
"superblock": true, 00:25:17.486 "num_base_bdevs": 4, 00:25:17.486 "num_base_bdevs_discovered": 4, 00:25:17.486 "num_base_bdevs_operational": 4, 00:25:17.486 "base_bdevs_list": [ 00:25:17.486 { 00:25:17.486 "name": "pt1", 00:25:17.486 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:17.486 "is_configured": true, 00:25:17.486 "data_offset": 2048, 00:25:17.486 "data_size": 63488 00:25:17.486 }, 00:25:17.486 { 00:25:17.486 "name": "pt2", 00:25:17.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:17.486 "is_configured": true, 00:25:17.486 "data_offset": 2048, 00:25:17.486 "data_size": 63488 00:25:17.486 }, 00:25:17.486 { 00:25:17.486 "name": "pt3", 00:25:17.486 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:17.486 "is_configured": true, 00:25:17.486 "data_offset": 2048, 00:25:17.486 "data_size": 63488 00:25:17.486 }, 00:25:17.486 { 00:25:17.486 "name": "pt4", 00:25:17.486 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:17.486 "is_configured": true, 00:25:17.486 "data_offset": 2048, 00:25:17.486 "data_size": 63488 00:25:17.486 } 00:25:17.486 ] 00:25:17.486 }' 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:17.486 13:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.053 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:25:18.053 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:18.053 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:18.053 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:18.053 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:18.053 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:18.053 13:45:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:18.053 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:18.053 13:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.053 13:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.053 [2024-11-20 13:45:20.878311] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:18.053 13:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.053 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:18.053 "name": "raid_bdev1", 00:25:18.053 "aliases": [ 00:25:18.053 "35ec1612-8453-4659-8cab-92d87502c5b0" 00:25:18.053 ], 00:25:18.053 "product_name": "Raid Volume", 00:25:18.053 "block_size": 512, 00:25:18.053 "num_blocks": 253952, 00:25:18.053 "uuid": "35ec1612-8453-4659-8cab-92d87502c5b0", 00:25:18.053 "assigned_rate_limits": { 00:25:18.053 "rw_ios_per_sec": 0, 00:25:18.053 "rw_mbytes_per_sec": 0, 00:25:18.053 "r_mbytes_per_sec": 0, 00:25:18.053 "w_mbytes_per_sec": 0 00:25:18.053 }, 00:25:18.053 "claimed": false, 00:25:18.053 "zoned": false, 00:25:18.053 "supported_io_types": { 00:25:18.053 "read": true, 00:25:18.053 "write": true, 00:25:18.053 "unmap": true, 00:25:18.053 "flush": true, 00:25:18.053 "reset": true, 00:25:18.053 "nvme_admin": false, 00:25:18.053 "nvme_io": false, 00:25:18.053 "nvme_io_md": false, 00:25:18.053 "write_zeroes": true, 00:25:18.053 "zcopy": false, 00:25:18.053 "get_zone_info": false, 00:25:18.053 "zone_management": false, 00:25:18.053 "zone_append": false, 00:25:18.053 "compare": false, 00:25:18.053 "compare_and_write": false, 00:25:18.053 "abort": false, 00:25:18.053 "seek_hole": false, 00:25:18.053 "seek_data": false, 00:25:18.053 "copy": false, 00:25:18.053 "nvme_iov_md": false 00:25:18.053 }, 00:25:18.053 
"memory_domains": [ 00:25:18.053 { 00:25:18.053 "dma_device_id": "system", 00:25:18.053 "dma_device_type": 1 00:25:18.053 }, 00:25:18.053 { 00:25:18.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.053 "dma_device_type": 2 00:25:18.053 }, 00:25:18.053 { 00:25:18.053 "dma_device_id": "system", 00:25:18.053 "dma_device_type": 1 00:25:18.053 }, 00:25:18.053 { 00:25:18.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.053 "dma_device_type": 2 00:25:18.053 }, 00:25:18.053 { 00:25:18.053 "dma_device_id": "system", 00:25:18.053 "dma_device_type": 1 00:25:18.053 }, 00:25:18.053 { 00:25:18.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.053 "dma_device_type": 2 00:25:18.053 }, 00:25:18.053 { 00:25:18.053 "dma_device_id": "system", 00:25:18.053 "dma_device_type": 1 00:25:18.053 }, 00:25:18.053 { 00:25:18.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.053 "dma_device_type": 2 00:25:18.053 } 00:25:18.053 ], 00:25:18.053 "driver_specific": { 00:25:18.053 "raid": { 00:25:18.053 "uuid": "35ec1612-8453-4659-8cab-92d87502c5b0", 00:25:18.053 "strip_size_kb": 64, 00:25:18.053 "state": "online", 00:25:18.053 "raid_level": "concat", 00:25:18.053 "superblock": true, 00:25:18.053 "num_base_bdevs": 4, 00:25:18.053 "num_base_bdevs_discovered": 4, 00:25:18.053 "num_base_bdevs_operational": 4, 00:25:18.053 "base_bdevs_list": [ 00:25:18.053 { 00:25:18.053 "name": "pt1", 00:25:18.053 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:18.053 "is_configured": true, 00:25:18.053 "data_offset": 2048, 00:25:18.053 "data_size": 63488 00:25:18.053 }, 00:25:18.053 { 00:25:18.053 "name": "pt2", 00:25:18.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:18.053 "is_configured": true, 00:25:18.053 "data_offset": 2048, 00:25:18.053 "data_size": 63488 00:25:18.053 }, 00:25:18.053 { 00:25:18.053 "name": "pt3", 00:25:18.053 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:18.053 "is_configured": true, 00:25:18.053 "data_offset": 2048, 00:25:18.053 "data_size": 63488 
00:25:18.053 }, 00:25:18.053 { 00:25:18.053 "name": "pt4", 00:25:18.053 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:18.053 "is_configured": true, 00:25:18.053 "data_offset": 2048, 00:25:18.053 "data_size": 63488 00:25:18.053 } 00:25:18.053 ] 00:25:18.053 } 00:25:18.053 } 00:25:18.053 }' 00:25:18.053 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:18.312 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:25:18.312 pt2 00:25:18.312 pt3 00:25:18.312 pt4' 00:25:18.312 13:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.312 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.571 [2024-11-20 13:45:21.274454] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 35ec1612-8453-4659-8cab-92d87502c5b0 '!=' 35ec1612-8453-4659-8cab-92d87502c5b0 ']' 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72956 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72956 ']' 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72956 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72956 00:25:18.571 killing process with pid 72956 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72956' 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72956 00:25:18.571 [2024-11-20 13:45:21.364697] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:18.571 13:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72956 00:25:18.571 [2024-11-20 13:45:21.364833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:18.571 [2024-11-20 13:45:21.364992] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:18.571 [2024-11-20 13:45:21.365014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:25:18.829 [2024-11-20 13:45:21.741642] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:20.204 13:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:25:20.204 00:25:20.204 real 0m6.209s 00:25:20.204 user 0m9.370s 00:25:20.204 sys 0m0.930s 00:25:20.204 13:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:20.204 ************************************ 00:25:20.204 END TEST raid_superblock_test 00:25:20.204 ************************************ 00:25:20.204 13:45:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.204 13:45:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:25:20.204 13:45:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:20.204 13:45:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:20.204 13:45:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:20.204 ************************************ 00:25:20.204 START TEST raid_read_error_test 00:25:20.204 ************************************ 00:25:20.204 13:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:25:20.204 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:25:20.204 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:25:20.204 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:25:20.204 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:25:20.204 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:20.204 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:25:20.204 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.akBfhljuTK 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73229 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73229 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 73229 ']' 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.205 13:45:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.205 [2024-11-20 13:45:23.020700] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:25:20.205 [2024-11-20 13:45:23.020971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73229 ] 00:25:20.463 [2024-11-20 13:45:23.209523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.463 [2024-11-20 13:45:23.353590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.722 [2024-11-20 13:45:23.571845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:20.722 [2024-11-20 13:45:23.571935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.289 BaseBdev1_malloc 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.289 true 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.289 [2024-11-20 13:45:24.135907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:21.289 [2024-11-20 13:45:24.135982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:21.289 [2024-11-20 13:45:24.136013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:21.289 [2024-11-20 13:45:24.136031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:21.289 [2024-11-20 13:45:24.138867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:21.289 [2024-11-20 13:45:24.139088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:21.289 BaseBdev1 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.289 BaseBdev2_malloc 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.289 true 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.289 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.549 [2024-11-20 13:45:24.204918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:21.549 [2024-11-20 13:45:24.205123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:21.549 [2024-11-20 13:45:24.205159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:21.549 [2024-11-20 13:45:24.205178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:21.549 [2024-11-20 13:45:24.207956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:21.549 [2024-11-20 13:45:24.208008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:21.549 BaseBdev2 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.549 BaseBdev3_malloc 00:25:21.549 13:45:24 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.549 true 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.549 [2024-11-20 13:45:24.282764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:21.549 [2024-11-20 13:45:24.282990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:21.549 [2024-11-20 13:45:24.283028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:21.549 [2024-11-20 13:45:24.283048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:21.549 [2024-11-20 13:45:24.285821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:21.549 [2024-11-20 13:45:24.285874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:21.549 BaseBdev3 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.549 BaseBdev4_malloc 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.549 true 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.549 [2024-11-20 13:45:24.350740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:25:21.549 [2024-11-20 13:45:24.350958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:21.549 [2024-11-20 13:45:24.351002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:21.549 [2024-11-20 13:45:24.351021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:21.549 [2024-11-20 13:45:24.353799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:21.549 [2024-11-20 13:45:24.353978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:21.549 BaseBdev4 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.549 [2024-11-20 13:45:24.362952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:21.549 [2024-11-20 13:45:24.365363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:21.549 [2024-11-20 13:45:24.365596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:21.549 [2024-11-20 13:45:24.365714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:21.549 [2024-11-20 13:45:24.366057] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:25:21.549 [2024-11-20 13:45:24.366085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:21.549 [2024-11-20 13:45:24.366420] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:25:21.549 [2024-11-20 13:45:24.366662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:25:21.549 [2024-11-20 13:45:24.366684] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:25:21.549 [2024-11-20 13:45:24.366940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:21.549 13:45:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.549 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:21.549 "name": "raid_bdev1", 00:25:21.549 "uuid": "a03a3eab-62a9-42f1-ae59-d1653eaddb9d", 00:25:21.549 "strip_size_kb": 64, 00:25:21.549 "state": "online", 00:25:21.549 "raid_level": "concat", 00:25:21.549 "superblock": true, 00:25:21.549 "num_base_bdevs": 4, 00:25:21.549 "num_base_bdevs_discovered": 4, 00:25:21.549 "num_base_bdevs_operational": 4, 00:25:21.549 "base_bdevs_list": [ 
00:25:21.549 { 00:25:21.549 "name": "BaseBdev1", 00:25:21.549 "uuid": "7b66e773-b0d0-5c5b-8b5b-3261ffee5e50", 00:25:21.549 "is_configured": true, 00:25:21.549 "data_offset": 2048, 00:25:21.549 "data_size": 63488 00:25:21.549 }, 00:25:21.549 { 00:25:21.549 "name": "BaseBdev2", 00:25:21.549 "uuid": "9f069303-5e98-5399-a702-6a85f36d87e3", 00:25:21.549 "is_configured": true, 00:25:21.549 "data_offset": 2048, 00:25:21.549 "data_size": 63488 00:25:21.549 }, 00:25:21.549 { 00:25:21.549 "name": "BaseBdev3", 00:25:21.550 "uuid": "d1d7ee17-9369-5a52-81e0-9f8f8d15c08d", 00:25:21.550 "is_configured": true, 00:25:21.550 "data_offset": 2048, 00:25:21.550 "data_size": 63488 00:25:21.550 }, 00:25:21.550 { 00:25:21.550 "name": "BaseBdev4", 00:25:21.550 "uuid": "76dd9b31-4de8-5ce3-bcc3-f0253ef867bb", 00:25:21.550 "is_configured": true, 00:25:21.550 "data_offset": 2048, 00:25:21.550 "data_size": 63488 00:25:21.550 } 00:25:21.550 ] 00:25:21.550 }' 00:25:21.550 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:21.550 13:45:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.116 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:25:22.116 13:45:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:22.116 [2024-11-20 13:45:25.028519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.052 13:45:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.052 13:45:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.318 13:45:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:23.318 "name": "raid_bdev1", 00:25:23.318 "uuid": "a03a3eab-62a9-42f1-ae59-d1653eaddb9d", 00:25:23.318 "strip_size_kb": 64, 00:25:23.318 "state": "online", 00:25:23.318 "raid_level": "concat", 00:25:23.318 "superblock": true, 00:25:23.318 "num_base_bdevs": 4, 00:25:23.318 "num_base_bdevs_discovered": 4, 00:25:23.318 "num_base_bdevs_operational": 4, 00:25:23.318 "base_bdevs_list": [ 00:25:23.318 { 00:25:23.318 "name": "BaseBdev1", 00:25:23.318 "uuid": "7b66e773-b0d0-5c5b-8b5b-3261ffee5e50", 00:25:23.318 "is_configured": true, 00:25:23.318 "data_offset": 2048, 00:25:23.318 "data_size": 63488 00:25:23.318 }, 00:25:23.318 { 00:25:23.318 "name": "BaseBdev2", 00:25:23.318 "uuid": "9f069303-5e98-5399-a702-6a85f36d87e3", 00:25:23.318 "is_configured": true, 00:25:23.318 "data_offset": 2048, 00:25:23.318 "data_size": 63488 00:25:23.318 }, 00:25:23.318 { 00:25:23.318 "name": "BaseBdev3", 00:25:23.318 "uuid": "d1d7ee17-9369-5a52-81e0-9f8f8d15c08d", 00:25:23.318 "is_configured": true, 00:25:23.318 "data_offset": 2048, 00:25:23.318 "data_size": 63488 00:25:23.318 }, 00:25:23.318 { 00:25:23.318 "name": "BaseBdev4", 00:25:23.318 "uuid": "76dd9b31-4de8-5ce3-bcc3-f0253ef867bb", 00:25:23.318 "is_configured": true, 00:25:23.318 "data_offset": 2048, 00:25:23.318 "data_size": 63488 00:25:23.318 } 00:25:23.318 ] 00:25:23.318 }' 00:25:23.318 13:45:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:23.318 13:45:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.578 13:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:23.578 13:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.578 13:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.578 [2024-11-20 13:45:26.436480] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:23.578 [2024-11-20 13:45:26.436520] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:23.578 [2024-11-20 13:45:26.439992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:23.578 [2024-11-20 13:45:26.440088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:23.578 [2024-11-20 13:45:26.440153] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:23.578 [2024-11-20 13:45:26.440176] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:25:23.578 { 00:25:23.578 "results": [ 00:25:23.578 { 00:25:23.578 "job": "raid_bdev1", 00:25:23.578 "core_mask": "0x1", 00:25:23.578 "workload": "randrw", 00:25:23.578 "percentage": 50, 00:25:23.578 "status": "finished", 00:25:23.578 "queue_depth": 1, 00:25:23.578 "io_size": 131072, 00:25:23.578 "runtime": 1.405337, 00:25:23.578 "iops": 10394.659786229211, 00:25:23.578 "mibps": 1299.3324732786514, 00:25:23.578 "io_failed": 1, 00:25:23.578 "io_timeout": 0, 00:25:23.578 "avg_latency_us": 133.97888773421118, 00:25:23.578 "min_latency_us": 43.985454545454544, 00:25:23.578 "max_latency_us": 1832.0290909090909 00:25:23.578 } 00:25:23.578 ], 00:25:23.578 "core_count": 1 00:25:23.578 } 00:25:23.578 13:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.578 13:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73229 00:25:23.578 13:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73229 ']' 00:25:23.578 13:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73229 00:25:23.578 13:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:25:23.578 13:45:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:23.578 13:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73229 00:25:23.578 killing process with pid 73229 00:25:23.578 13:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:23.578 13:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:23.578 13:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73229' 00:25:23.578 13:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73229 00:25:23.578 [2024-11-20 13:45:26.478907] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:23.578 13:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73229 00:25:24.145 [2024-11-20 13:45:26.772873] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:25.080 13:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.akBfhljuTK 00:25:25.080 13:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:25:25.080 13:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:25:25.080 13:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:25:25.080 13:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:25:25.080 13:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:25.080 13:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:25.080 13:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:25:25.080 ************************************ 00:25:25.080 END TEST raid_read_error_test 00:25:25.080 ************************************ 
00:25:25.080 00:25:25.080 real 0m5.004s 00:25:25.080 user 0m6.199s 00:25:25.080 sys 0m0.647s 00:25:25.080 13:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:25.080 13:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.080 13:45:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:25:25.080 13:45:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:25.080 13:45:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:25.080 13:45:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:25.080 ************************************ 00:25:25.080 START TEST raid_write_error_test 00:25:25.080 ************************************ 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:25.080 13:45:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # 
bdevperf_log=/raidtest/tmp.GlNLgkDLjK 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73375 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73375 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73375 ']' 00:25:25.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:25.080 13:45:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.337 [2024-11-20 13:45:28.072769] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:25:25.337 [2024-11-20 13:45:28.072975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73375 ] 00:25:25.595 [2024-11-20 13:45:28.263076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.595 [2024-11-20 13:45:28.425657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.852 [2024-11-20 13:45:28.650224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:25.852 [2024-11-20 13:45:28.650302] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.416 BaseBdev1_malloc 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.416 true 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.416 [2024-11-20 13:45:29.129448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:26.416 [2024-11-20 13:45:29.129665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.416 [2024-11-20 13:45:29.129704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:26.416 [2024-11-20 13:45:29.129723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.416 [2024-11-20 13:45:29.133183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.416 BaseBdev1 00:25:26.416 [2024-11-20 13:45:29.133356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.416 BaseBdev2_malloc 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:25:26.416 13:45:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.416 true 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.416 [2024-11-20 13:45:29.187277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:26.416 [2024-11-20 13:45:29.187350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.416 [2024-11-20 13:45:29.187375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:26.416 [2024-11-20 13:45:29.187391] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.416 [2024-11-20 13:45:29.190149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.416 [2024-11-20 13:45:29.190198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:26.416 BaseBdev2 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:25:26.416 BaseBdev3_malloc 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.416 true 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.416 [2024-11-20 13:45:29.250421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:26.416 [2024-11-20 13:45:29.250491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.416 [2024-11-20 13:45:29.250518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:26.416 [2024-11-20 13:45:29.250536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.416 [2024-11-20 13:45:29.253348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.416 [2024-11-20 13:45:29.253399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:26.416 BaseBdev3 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.416 BaseBdev4_malloc 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.416 true 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.416 [2024-11-20 13:45:29.306310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:25:26.416 [2024-11-20 13:45:29.306380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.416 [2024-11-20 13:45:29.306408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:26.416 [2024-11-20 13:45:29.306425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.416 [2024-11-20 13:45:29.309185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.416 [2024-11-20 13:45:29.309238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:26.416 BaseBdev4 
00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.416 [2024-11-20 13:45:29.314451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:26.416 [2024-11-20 13:45:29.316868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:26.416 [2024-11-20 13:45:29.317124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:26.416 [2024-11-20 13:45:29.317236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:26.416 [2024-11-20 13:45:29.317527] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:25:26.416 [2024-11-20 13:45:29.317549] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:26.416 [2024-11-20 13:45:29.317862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:25:26.416 [2024-11-20 13:45:29.318098] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:25:26.416 [2024-11-20 13:45:29.318123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:25:26.416 [2024-11-20 13:45:29.318361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.416 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.673 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.673 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:26.673 "name": "raid_bdev1", 00:25:26.673 "uuid": "8e0174c1-1fdf-49d1-a3b8-5e05b9943c25", 00:25:26.673 "strip_size_kb": 64, 00:25:26.673 "state": "online", 00:25:26.673 "raid_level": "concat", 00:25:26.673 "superblock": true, 00:25:26.673 "num_base_bdevs": 4, 00:25:26.673 "num_base_bdevs_discovered": 4, 00:25:26.673 
"num_base_bdevs_operational": 4, 00:25:26.673 "base_bdevs_list": [ 00:25:26.673 { 00:25:26.673 "name": "BaseBdev1", 00:25:26.673 "uuid": "b45fb432-d255-5913-bb53-17b9b273a909", 00:25:26.673 "is_configured": true, 00:25:26.673 "data_offset": 2048, 00:25:26.673 "data_size": 63488 00:25:26.673 }, 00:25:26.673 { 00:25:26.673 "name": "BaseBdev2", 00:25:26.673 "uuid": "a6202604-231c-5bed-857a-a243140a37bc", 00:25:26.673 "is_configured": true, 00:25:26.673 "data_offset": 2048, 00:25:26.673 "data_size": 63488 00:25:26.673 }, 00:25:26.673 { 00:25:26.673 "name": "BaseBdev3", 00:25:26.673 "uuid": "d4e35e2a-ad05-5ff1-943f-989ab83e7162", 00:25:26.673 "is_configured": true, 00:25:26.673 "data_offset": 2048, 00:25:26.673 "data_size": 63488 00:25:26.673 }, 00:25:26.673 { 00:25:26.673 "name": "BaseBdev4", 00:25:26.673 "uuid": "cba92391-b0a4-54c5-b0f7-0c2aaff91286", 00:25:26.673 "is_configured": true, 00:25:26.673 "data_offset": 2048, 00:25:26.673 "data_size": 63488 00:25:26.673 } 00:25:26.673 ] 00:25:26.673 }' 00:25:26.673 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:26.673 13:45:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.240 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:25:27.240 13:45:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:25:27.240 [2024-11-20 13:45:29.980011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:25:28.175 13:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:25:28.175 13:45:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.175 13:45:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.175 13:45:30 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.175 13:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:25:28.175 13:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:25:28.175 13:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:25:28.175 13:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:28.175 13:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:28.175 13:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:28.175 13:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:25:28.175 13:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:25:28.175 13:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:28.175 13:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:28.175 13:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:28.176 13:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:28.176 13:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:28.176 13:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:28.176 13:45:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.176 13:45:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.176 13:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.176 13:45:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.176 13:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:28.176 "name": "raid_bdev1", 00:25:28.176 "uuid": "8e0174c1-1fdf-49d1-a3b8-5e05b9943c25", 00:25:28.176 "strip_size_kb": 64, 00:25:28.176 "state": "online", 00:25:28.176 "raid_level": "concat", 00:25:28.176 "superblock": true, 00:25:28.176 "num_base_bdevs": 4, 00:25:28.176 "num_base_bdevs_discovered": 4, 00:25:28.176 "num_base_bdevs_operational": 4, 00:25:28.176 "base_bdevs_list": [ 00:25:28.176 { 00:25:28.176 "name": "BaseBdev1", 00:25:28.176 "uuid": "b45fb432-d255-5913-bb53-17b9b273a909", 00:25:28.176 "is_configured": true, 00:25:28.176 "data_offset": 2048, 00:25:28.176 "data_size": 63488 00:25:28.176 }, 00:25:28.176 { 00:25:28.176 "name": "BaseBdev2", 00:25:28.176 "uuid": "a6202604-231c-5bed-857a-a243140a37bc", 00:25:28.176 "is_configured": true, 00:25:28.176 "data_offset": 2048, 00:25:28.176 "data_size": 63488 00:25:28.176 }, 00:25:28.176 { 00:25:28.176 "name": "BaseBdev3", 00:25:28.176 "uuid": "d4e35e2a-ad05-5ff1-943f-989ab83e7162", 00:25:28.176 "is_configured": true, 00:25:28.176 "data_offset": 2048, 00:25:28.176 "data_size": 63488 00:25:28.176 }, 00:25:28.176 { 00:25:28.176 "name": "BaseBdev4", 00:25:28.176 "uuid": "cba92391-b0a4-54c5-b0f7-0c2aaff91286", 00:25:28.176 "is_configured": true, 00:25:28.176 "data_offset": 2048, 00:25:28.176 "data_size": 63488 00:25:28.176 } 00:25:28.176 ] 00:25:28.176 }' 00:25:28.176 13:45:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:28.176 13:45:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.743 13:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:25:28.743 13:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.743 13:45:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:25:28.743 [2024-11-20 13:45:31.391288] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:28.743 [2024-11-20 13:45:31.391475] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:28.743 [2024-11-20 13:45:31.394933] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:28.743 [2024-11-20 13:45:31.395016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:28.743 [2024-11-20 13:45:31.395075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:28.743 [2024-11-20 13:45:31.395092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:25:28.743 { 00:25:28.743 "results": [ 00:25:28.743 { 00:25:28.743 "job": "raid_bdev1", 00:25:28.743 "core_mask": "0x1", 00:25:28.743 "workload": "randrw", 00:25:28.743 "percentage": 50, 00:25:28.743 "status": "finished", 00:25:28.743 "queue_depth": 1, 00:25:28.743 "io_size": 131072, 00:25:28.743 "runtime": 1.408789, 00:25:28.743 "iops": 10462.177089684828, 00:25:28.743 "mibps": 1307.7721362106035, 00:25:28.743 "io_failed": 1, 00:25:28.743 "io_timeout": 0, 00:25:28.743 "avg_latency_us": 133.10516639940792, 00:25:28.743 "min_latency_us": 43.52, 00:25:28.743 "max_latency_us": 1832.0290909090909 00:25:28.743 } 00:25:28.743 ], 00:25:28.743 "core_count": 1 00:25:28.743 } 00:25:28.743 13:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.743 13:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73375 00:25:28.743 13:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73375 ']' 00:25:28.743 13:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73375 00:25:28.743 13:45:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:25:28.743 13:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:28.743 13:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73375 00:25:28.743 killing process with pid 73375 00:25:28.743 13:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:28.743 13:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:28.743 13:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73375' 00:25:28.743 13:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73375 00:25:28.743 13:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73375 00:25:28.743 [2024-11-20 13:45:31.430047] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:29.001 [2024-11-20 13:45:31.724981] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:29.935 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GlNLgkDLjK 00:25:29.935 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:25:29.935 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:25:29.935 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:25:29.935 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:25:29.935 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:29.935 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:25:29.935 13:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:25:29.935 00:25:29.935 real 0m4.871s 00:25:29.935 user 0m6.047s 
00:25:29.935 sys 0m0.593s 00:25:29.935 ************************************ 00:25:29.935 END TEST raid_write_error_test 00:25:29.935 ************************************ 00:25:29.935 13:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:29.935 13:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.193 13:45:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:25:30.193 13:45:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:25:30.193 13:45:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:30.193 13:45:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:30.193 13:45:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:30.193 ************************************ 00:25:30.193 START TEST raid_state_function_test 00:25:30.193 ************************************ 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:30.193 
13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:25:30.193 Process raid pid: 73524 
00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73524 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73524' 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73524 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:30.193 13:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73524 ']' 00:25:30.194 13:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.194 13:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.194 13:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.194 13:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.194 13:45:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.194 [2024-11-20 13:45:33.012835] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:25:30.194 [2024-11-20 13:45:33.013303] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.452 [2024-11-20 13:45:33.198467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.452 [2024-11-20 13:45:33.329834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.710 [2024-11-20 13:45:33.536464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:30.710 [2024-11-20 13:45:33.536715] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.277 [2024-11-20 13:45:34.086970] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:31.277 [2024-11-20 13:45:34.087175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:31.277 [2024-11-20 13:45:34.087340] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:31.277 [2024-11-20 13:45:34.087376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:31.277 [2024-11-20 13:45:34.087389] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:25:31.277 [2024-11-20 13:45:34.087405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:31.277 [2024-11-20 13:45:34.087415] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:31.277 [2024-11-20 13:45:34.087429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:31.277 "name": "Existed_Raid", 00:25:31.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.277 "strip_size_kb": 0, 00:25:31.277 "state": "configuring", 00:25:31.277 "raid_level": "raid1", 00:25:31.277 "superblock": false, 00:25:31.277 "num_base_bdevs": 4, 00:25:31.277 "num_base_bdevs_discovered": 0, 00:25:31.277 "num_base_bdevs_operational": 4, 00:25:31.277 "base_bdevs_list": [ 00:25:31.277 { 00:25:31.277 "name": "BaseBdev1", 00:25:31.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.277 "is_configured": false, 00:25:31.277 "data_offset": 0, 00:25:31.277 "data_size": 0 00:25:31.277 }, 00:25:31.277 { 00:25:31.277 "name": "BaseBdev2", 00:25:31.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.277 "is_configured": false, 00:25:31.277 "data_offset": 0, 00:25:31.277 "data_size": 0 00:25:31.277 }, 00:25:31.277 { 00:25:31.277 "name": "BaseBdev3", 00:25:31.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.277 "is_configured": false, 00:25:31.277 "data_offset": 0, 00:25:31.277 "data_size": 0 00:25:31.277 }, 00:25:31.277 { 00:25:31.277 "name": "BaseBdev4", 00:25:31.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.277 "is_configured": false, 00:25:31.277 "data_offset": 0, 00:25:31.277 "data_size": 0 00:25:31.277 } 00:25:31.277 ] 00:25:31.277 }' 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:31.277 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.846 [2024-11-20 13:45:34.627139] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:31.846 [2024-11-20 13:45:34.627212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.846 [2024-11-20 13:45:34.635118] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:31.846 [2024-11-20 13:45:34.635171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:31.846 [2024-11-20 13:45:34.635186] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:31.846 [2024-11-20 13:45:34.635212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:31.846 [2024-11-20 13:45:34.635224] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:31.846 [2024-11-20 13:45:34.635238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:31.846 [2024-11-20 13:45:34.635248] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:31.846 [2024-11-20 13:45:34.635261] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.846 [2024-11-20 13:45:34.681856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:31.846 BaseBdev1 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.846 [ 00:25:31.846 { 00:25:31.846 "name": "BaseBdev1", 00:25:31.846 "aliases": [ 00:25:31.846 "abdc6655-8fb6-4c85-b0cf-6159f3d295bb" 00:25:31.846 ], 00:25:31.846 "product_name": "Malloc disk", 00:25:31.846 "block_size": 512, 00:25:31.846 "num_blocks": 65536, 00:25:31.846 "uuid": "abdc6655-8fb6-4c85-b0cf-6159f3d295bb", 00:25:31.846 "assigned_rate_limits": { 00:25:31.846 "rw_ios_per_sec": 0, 00:25:31.846 "rw_mbytes_per_sec": 0, 00:25:31.846 "r_mbytes_per_sec": 0, 00:25:31.846 "w_mbytes_per_sec": 0 00:25:31.846 }, 00:25:31.846 "claimed": true, 00:25:31.846 "claim_type": "exclusive_write", 00:25:31.846 "zoned": false, 00:25:31.846 "supported_io_types": { 00:25:31.846 "read": true, 00:25:31.846 "write": true, 00:25:31.846 "unmap": true, 00:25:31.846 "flush": true, 00:25:31.846 "reset": true, 00:25:31.846 "nvme_admin": false, 00:25:31.846 "nvme_io": false, 00:25:31.846 "nvme_io_md": false, 00:25:31.846 "write_zeroes": true, 00:25:31.846 "zcopy": true, 00:25:31.846 "get_zone_info": false, 00:25:31.846 "zone_management": false, 00:25:31.846 "zone_append": false, 00:25:31.846 "compare": false, 00:25:31.846 "compare_and_write": false, 00:25:31.846 "abort": true, 00:25:31.846 "seek_hole": false, 00:25:31.846 "seek_data": false, 00:25:31.846 "copy": true, 00:25:31.846 "nvme_iov_md": false 00:25:31.846 }, 00:25:31.846 "memory_domains": [ 00:25:31.846 { 00:25:31.846 "dma_device_id": "system", 00:25:31.846 "dma_device_type": 1 00:25:31.846 }, 00:25:31.846 { 00:25:31.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.846 "dma_device_type": 2 00:25:31.846 } 00:25:31.846 ], 00:25:31.846 "driver_specific": {} 00:25:31.846 } 00:25:31.846 ] 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.846 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.105 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:32.105 "name": "Existed_Raid", 
00:25:32.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.105 "strip_size_kb": 0, 00:25:32.105 "state": "configuring", 00:25:32.105 "raid_level": "raid1", 00:25:32.105 "superblock": false, 00:25:32.105 "num_base_bdevs": 4, 00:25:32.105 "num_base_bdevs_discovered": 1, 00:25:32.105 "num_base_bdevs_operational": 4, 00:25:32.105 "base_bdevs_list": [ 00:25:32.105 { 00:25:32.105 "name": "BaseBdev1", 00:25:32.105 "uuid": "abdc6655-8fb6-4c85-b0cf-6159f3d295bb", 00:25:32.105 "is_configured": true, 00:25:32.105 "data_offset": 0, 00:25:32.105 "data_size": 65536 00:25:32.105 }, 00:25:32.105 { 00:25:32.105 "name": "BaseBdev2", 00:25:32.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.105 "is_configured": false, 00:25:32.105 "data_offset": 0, 00:25:32.105 "data_size": 0 00:25:32.105 }, 00:25:32.105 { 00:25:32.105 "name": "BaseBdev3", 00:25:32.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.105 "is_configured": false, 00:25:32.105 "data_offset": 0, 00:25:32.105 "data_size": 0 00:25:32.105 }, 00:25:32.105 { 00:25:32.105 "name": "BaseBdev4", 00:25:32.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.105 "is_configured": false, 00:25:32.105 "data_offset": 0, 00:25:32.105 "data_size": 0 00:25:32.105 } 00:25:32.105 ] 00:25:32.105 }' 00:25:32.105 13:45:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:32.105 13:45:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.364 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:32.364 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.364 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.364 [2024-11-20 13:45:35.238147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:32.365 [2024-11-20 13:45:35.238395] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.365 [2024-11-20 13:45:35.246199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:32.365 [2024-11-20 13:45:35.248894] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:32.365 [2024-11-20 13:45:35.249141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:32.365 [2024-11-20 13:45:35.249169] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:32.365 [2024-11-20 13:45:35.249188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:32.365 [2024-11-20 13:45:35.249199] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:32.365 [2024-11-20 13:45:35.249212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:32.365 
13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.365 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.624 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:32.624 "name": "Existed_Raid", 00:25:32.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.624 "strip_size_kb": 0, 00:25:32.624 "state": "configuring", 00:25:32.624 "raid_level": "raid1", 00:25:32.624 "superblock": false, 00:25:32.624 "num_base_bdevs": 4, 00:25:32.624 "num_base_bdevs_discovered": 1, 
00:25:32.624 "num_base_bdevs_operational": 4, 00:25:32.625 "base_bdevs_list": [ 00:25:32.625 { 00:25:32.625 "name": "BaseBdev1", 00:25:32.625 "uuid": "abdc6655-8fb6-4c85-b0cf-6159f3d295bb", 00:25:32.625 "is_configured": true, 00:25:32.625 "data_offset": 0, 00:25:32.625 "data_size": 65536 00:25:32.625 }, 00:25:32.625 { 00:25:32.625 "name": "BaseBdev2", 00:25:32.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.625 "is_configured": false, 00:25:32.625 "data_offset": 0, 00:25:32.625 "data_size": 0 00:25:32.625 }, 00:25:32.625 { 00:25:32.625 "name": "BaseBdev3", 00:25:32.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.625 "is_configured": false, 00:25:32.625 "data_offset": 0, 00:25:32.625 "data_size": 0 00:25:32.625 }, 00:25:32.625 { 00:25:32.625 "name": "BaseBdev4", 00:25:32.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.625 "is_configured": false, 00:25:32.625 "data_offset": 0, 00:25:32.625 "data_size": 0 00:25:32.625 } 00:25:32.625 ] 00:25:32.625 }' 00:25:32.625 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:32.625 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.884 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:32.884 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.884 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.145 [2024-11-20 13:45:35.814360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:33.145 BaseBdev2 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.145 [ 00:25:33.145 { 00:25:33.145 "name": "BaseBdev2", 00:25:33.145 "aliases": [ 00:25:33.145 "3ffd0948-5273-42c7-a18e-fc7442adf9d9" 00:25:33.145 ], 00:25:33.145 "product_name": "Malloc disk", 00:25:33.145 "block_size": 512, 00:25:33.145 "num_blocks": 65536, 00:25:33.145 "uuid": "3ffd0948-5273-42c7-a18e-fc7442adf9d9", 00:25:33.145 "assigned_rate_limits": { 00:25:33.145 "rw_ios_per_sec": 0, 00:25:33.145 "rw_mbytes_per_sec": 0, 00:25:33.145 "r_mbytes_per_sec": 0, 00:25:33.145 "w_mbytes_per_sec": 0 00:25:33.145 }, 00:25:33.145 "claimed": true, 00:25:33.145 "claim_type": "exclusive_write", 00:25:33.145 "zoned": false, 00:25:33.145 "supported_io_types": { 00:25:33.145 "read": true, 
00:25:33.145 "write": true, 00:25:33.145 "unmap": true, 00:25:33.145 "flush": true, 00:25:33.145 "reset": true, 00:25:33.145 "nvme_admin": false, 00:25:33.145 "nvme_io": false, 00:25:33.145 "nvme_io_md": false, 00:25:33.145 "write_zeroes": true, 00:25:33.145 "zcopy": true, 00:25:33.145 "get_zone_info": false, 00:25:33.145 "zone_management": false, 00:25:33.145 "zone_append": false, 00:25:33.145 "compare": false, 00:25:33.145 "compare_and_write": false, 00:25:33.145 "abort": true, 00:25:33.145 "seek_hole": false, 00:25:33.145 "seek_data": false, 00:25:33.145 "copy": true, 00:25:33.145 "nvme_iov_md": false 00:25:33.145 }, 00:25:33.145 "memory_domains": [ 00:25:33.145 { 00:25:33.145 "dma_device_id": "system", 00:25:33.145 "dma_device_type": 1 00:25:33.145 }, 00:25:33.145 { 00:25:33.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.145 "dma_device_type": 2 00:25:33.145 } 00:25:33.145 ], 00:25:33.145 "driver_specific": {} 00:25:33.145 } 00:25:33.145 ] 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:33.145 "name": "Existed_Raid", 00:25:33.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.145 "strip_size_kb": 0, 00:25:33.145 "state": "configuring", 00:25:33.145 "raid_level": "raid1", 00:25:33.145 "superblock": false, 00:25:33.145 "num_base_bdevs": 4, 00:25:33.145 "num_base_bdevs_discovered": 2, 00:25:33.145 "num_base_bdevs_operational": 4, 00:25:33.145 "base_bdevs_list": [ 00:25:33.145 { 00:25:33.145 "name": "BaseBdev1", 00:25:33.145 "uuid": "abdc6655-8fb6-4c85-b0cf-6159f3d295bb", 00:25:33.145 "is_configured": true, 00:25:33.145 "data_offset": 0, 00:25:33.145 "data_size": 65536 00:25:33.145 }, 00:25:33.145 { 00:25:33.145 "name": "BaseBdev2", 00:25:33.145 "uuid": "3ffd0948-5273-42c7-a18e-fc7442adf9d9", 00:25:33.145 "is_configured": true, 
00:25:33.145 "data_offset": 0, 00:25:33.145 "data_size": 65536 00:25:33.145 }, 00:25:33.145 { 00:25:33.145 "name": "BaseBdev3", 00:25:33.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.145 "is_configured": false, 00:25:33.145 "data_offset": 0, 00:25:33.145 "data_size": 0 00:25:33.145 }, 00:25:33.145 { 00:25:33.145 "name": "BaseBdev4", 00:25:33.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.145 "is_configured": false, 00:25:33.145 "data_offset": 0, 00:25:33.145 "data_size": 0 00:25:33.145 } 00:25:33.145 ] 00:25:33.145 }' 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:33.145 13:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.718 13:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:33.718 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.718 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.718 [2024-11-20 13:45:36.461435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:33.718 BaseBdev3 00:25:33.718 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.718 13:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:33.718 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:33.718 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:33.718 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:33.718 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:33.718 13:45:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:33.718 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:33.718 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.718 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.718 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.719 [ 00:25:33.719 { 00:25:33.719 "name": "BaseBdev3", 00:25:33.719 "aliases": [ 00:25:33.719 "ba0fa38e-0b28-4249-9a3c-f3950e649aca" 00:25:33.719 ], 00:25:33.719 "product_name": "Malloc disk", 00:25:33.719 "block_size": 512, 00:25:33.719 "num_blocks": 65536, 00:25:33.719 "uuid": "ba0fa38e-0b28-4249-9a3c-f3950e649aca", 00:25:33.719 "assigned_rate_limits": { 00:25:33.719 "rw_ios_per_sec": 0, 00:25:33.719 "rw_mbytes_per_sec": 0, 00:25:33.719 "r_mbytes_per_sec": 0, 00:25:33.719 "w_mbytes_per_sec": 0 00:25:33.719 }, 00:25:33.719 "claimed": true, 00:25:33.719 "claim_type": "exclusive_write", 00:25:33.719 "zoned": false, 00:25:33.719 "supported_io_types": { 00:25:33.719 "read": true, 00:25:33.719 "write": true, 00:25:33.719 "unmap": true, 00:25:33.719 "flush": true, 00:25:33.719 "reset": true, 00:25:33.719 "nvme_admin": false, 00:25:33.719 "nvme_io": false, 00:25:33.719 "nvme_io_md": false, 00:25:33.719 "write_zeroes": true, 00:25:33.719 "zcopy": true, 00:25:33.719 "get_zone_info": false, 00:25:33.719 "zone_management": false, 00:25:33.719 "zone_append": false, 00:25:33.719 "compare": false, 00:25:33.719 "compare_and_write": false, 
00:25:33.719 "abort": true, 00:25:33.719 "seek_hole": false, 00:25:33.719 "seek_data": false, 00:25:33.719 "copy": true, 00:25:33.719 "nvme_iov_md": false 00:25:33.719 }, 00:25:33.719 "memory_domains": [ 00:25:33.719 { 00:25:33.719 "dma_device_id": "system", 00:25:33.719 "dma_device_type": 1 00:25:33.719 }, 00:25:33.719 { 00:25:33.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.719 "dma_device_type": 2 00:25:33.719 } 00:25:33.719 ], 00:25:33.719 "driver_specific": {} 00:25:33.719 } 00:25:33.719 ] 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:33.719 "name": "Existed_Raid", 00:25:33.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.719 "strip_size_kb": 0, 00:25:33.719 "state": "configuring", 00:25:33.719 "raid_level": "raid1", 00:25:33.719 "superblock": false, 00:25:33.719 "num_base_bdevs": 4, 00:25:33.719 "num_base_bdevs_discovered": 3, 00:25:33.719 "num_base_bdevs_operational": 4, 00:25:33.719 "base_bdevs_list": [ 00:25:33.719 { 00:25:33.719 "name": "BaseBdev1", 00:25:33.719 "uuid": "abdc6655-8fb6-4c85-b0cf-6159f3d295bb", 00:25:33.719 "is_configured": true, 00:25:33.719 "data_offset": 0, 00:25:33.719 "data_size": 65536 00:25:33.719 }, 00:25:33.719 { 00:25:33.719 "name": "BaseBdev2", 00:25:33.719 "uuid": "3ffd0948-5273-42c7-a18e-fc7442adf9d9", 00:25:33.719 "is_configured": true, 00:25:33.719 "data_offset": 0, 00:25:33.719 "data_size": 65536 00:25:33.719 }, 00:25:33.719 { 00:25:33.719 "name": "BaseBdev3", 00:25:33.719 "uuid": "ba0fa38e-0b28-4249-9a3c-f3950e649aca", 00:25:33.719 "is_configured": true, 00:25:33.719 "data_offset": 0, 00:25:33.719 "data_size": 65536 00:25:33.719 }, 00:25:33.719 { 00:25:33.719 "name": "BaseBdev4", 00:25:33.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.719 "is_configured": false, 
00:25:33.719 "data_offset": 0, 00:25:33.719 "data_size": 0 00:25:33.719 } 00:25:33.719 ] 00:25:33.719 }' 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:33.719 13:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.285 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:25:34.285 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.285 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.285 [2024-11-20 13:45:37.073689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:34.285 [2024-11-20 13:45:37.073765] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:34.285 [2024-11-20 13:45:37.073779] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:34.285 [2024-11-20 13:45:37.074222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:34.285 [2024-11-20 13:45:37.074457] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:34.285 [2024-11-20 13:45:37.074477] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:34.285 [2024-11-20 13:45:37.074805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:34.285 BaseBdev4 00:25:34.285 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.285 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:25:34.285 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:25:34.285 13:45:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:34.285 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:34.285 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:34.285 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:34.285 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:34.285 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.285 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.285 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.285 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:34.285 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.285 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.285 [ 00:25:34.285 { 00:25:34.285 "name": "BaseBdev4", 00:25:34.285 "aliases": [ 00:25:34.285 "3b709e7c-70f7-45ee-83c4-38f1c7c57acc" 00:25:34.285 ], 00:25:34.285 "product_name": "Malloc disk", 00:25:34.285 "block_size": 512, 00:25:34.285 "num_blocks": 65536, 00:25:34.285 "uuid": "3b709e7c-70f7-45ee-83c4-38f1c7c57acc", 00:25:34.285 "assigned_rate_limits": { 00:25:34.285 "rw_ios_per_sec": 0, 00:25:34.285 "rw_mbytes_per_sec": 0, 00:25:34.285 "r_mbytes_per_sec": 0, 00:25:34.285 "w_mbytes_per_sec": 0 00:25:34.285 }, 00:25:34.285 "claimed": true, 00:25:34.285 "claim_type": "exclusive_write", 00:25:34.285 "zoned": false, 00:25:34.285 "supported_io_types": { 00:25:34.285 "read": true, 00:25:34.285 "write": true, 00:25:34.285 "unmap": true, 00:25:34.285 "flush": true, 00:25:34.285 "reset": true, 00:25:34.285 
"nvme_admin": false, 00:25:34.285 "nvme_io": false, 00:25:34.285 "nvme_io_md": false, 00:25:34.285 "write_zeroes": true, 00:25:34.285 "zcopy": true, 00:25:34.285 "get_zone_info": false, 00:25:34.285 "zone_management": false, 00:25:34.285 "zone_append": false, 00:25:34.285 "compare": false, 00:25:34.285 "compare_and_write": false, 00:25:34.285 "abort": true, 00:25:34.285 "seek_hole": false, 00:25:34.285 "seek_data": false, 00:25:34.285 "copy": true, 00:25:34.285 "nvme_iov_md": false 00:25:34.285 }, 00:25:34.285 "memory_domains": [ 00:25:34.285 { 00:25:34.285 "dma_device_id": "system", 00:25:34.286 "dma_device_type": 1 00:25:34.286 }, 00:25:34.286 { 00:25:34.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.286 "dma_device_type": 2 00:25:34.286 } 00:25:34.286 ], 00:25:34.286 "driver_specific": {} 00:25:34.286 } 00:25:34.286 ] 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:34.286 13:45:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:34.286 "name": "Existed_Raid", 00:25:34.286 "uuid": "0db6365b-f487-4134-8d14-76dcb14c99f2", 00:25:34.286 "strip_size_kb": 0, 00:25:34.286 "state": "online", 00:25:34.286 "raid_level": "raid1", 00:25:34.286 "superblock": false, 00:25:34.286 "num_base_bdevs": 4, 00:25:34.286 "num_base_bdevs_discovered": 4, 00:25:34.286 "num_base_bdevs_operational": 4, 00:25:34.286 "base_bdevs_list": [ 00:25:34.286 { 00:25:34.286 "name": "BaseBdev1", 00:25:34.286 "uuid": "abdc6655-8fb6-4c85-b0cf-6159f3d295bb", 00:25:34.286 "is_configured": true, 00:25:34.286 "data_offset": 0, 00:25:34.286 "data_size": 65536 00:25:34.286 }, 00:25:34.286 { 00:25:34.286 "name": "BaseBdev2", 00:25:34.286 "uuid": "3ffd0948-5273-42c7-a18e-fc7442adf9d9", 00:25:34.286 "is_configured": true, 00:25:34.286 "data_offset": 0, 00:25:34.286 "data_size": 65536 00:25:34.286 }, 00:25:34.286 { 00:25:34.286 "name": "BaseBdev3", 00:25:34.286 "uuid": 
"ba0fa38e-0b28-4249-9a3c-f3950e649aca", 00:25:34.286 "is_configured": true, 00:25:34.286 "data_offset": 0, 00:25:34.286 "data_size": 65536 00:25:34.286 }, 00:25:34.286 { 00:25:34.286 "name": "BaseBdev4", 00:25:34.286 "uuid": "3b709e7c-70f7-45ee-83c4-38f1c7c57acc", 00:25:34.286 "is_configured": true, 00:25:34.286 "data_offset": 0, 00:25:34.286 "data_size": 65536 00:25:34.286 } 00:25:34.286 ] 00:25:34.286 }' 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:34.286 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.853 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:34.853 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:34.853 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:34.853 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:34.853 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:34.853 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:34.853 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:34.853 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:34.853 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.853 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.853 [2024-11-20 13:45:37.630400] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:34.853 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.853 13:45:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:34.853 "name": "Existed_Raid", 00:25:34.853 "aliases": [ 00:25:34.853 "0db6365b-f487-4134-8d14-76dcb14c99f2" 00:25:34.853 ], 00:25:34.853 "product_name": "Raid Volume", 00:25:34.853 "block_size": 512, 00:25:34.853 "num_blocks": 65536, 00:25:34.853 "uuid": "0db6365b-f487-4134-8d14-76dcb14c99f2", 00:25:34.853 "assigned_rate_limits": { 00:25:34.853 "rw_ios_per_sec": 0, 00:25:34.853 "rw_mbytes_per_sec": 0, 00:25:34.853 "r_mbytes_per_sec": 0, 00:25:34.853 "w_mbytes_per_sec": 0 00:25:34.853 }, 00:25:34.853 "claimed": false, 00:25:34.853 "zoned": false, 00:25:34.853 "supported_io_types": { 00:25:34.853 "read": true, 00:25:34.853 "write": true, 00:25:34.853 "unmap": false, 00:25:34.853 "flush": false, 00:25:34.853 "reset": true, 00:25:34.853 "nvme_admin": false, 00:25:34.853 "nvme_io": false, 00:25:34.853 "nvme_io_md": false, 00:25:34.853 "write_zeroes": true, 00:25:34.853 "zcopy": false, 00:25:34.853 "get_zone_info": false, 00:25:34.853 "zone_management": false, 00:25:34.853 "zone_append": false, 00:25:34.853 "compare": false, 00:25:34.853 "compare_and_write": false, 00:25:34.853 "abort": false, 00:25:34.853 "seek_hole": false, 00:25:34.853 "seek_data": false, 00:25:34.853 "copy": false, 00:25:34.853 "nvme_iov_md": false 00:25:34.853 }, 00:25:34.853 "memory_domains": [ 00:25:34.853 { 00:25:34.853 "dma_device_id": "system", 00:25:34.853 "dma_device_type": 1 00:25:34.853 }, 00:25:34.853 { 00:25:34.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.853 "dma_device_type": 2 00:25:34.853 }, 00:25:34.853 { 00:25:34.853 "dma_device_id": "system", 00:25:34.853 "dma_device_type": 1 00:25:34.853 }, 00:25:34.853 { 00:25:34.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.853 "dma_device_type": 2 00:25:34.853 }, 00:25:34.853 { 00:25:34.853 "dma_device_id": "system", 00:25:34.853 "dma_device_type": 1 00:25:34.853 }, 00:25:34.853 { 00:25:34.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:25:34.853 "dma_device_type": 2 00:25:34.853 }, 00:25:34.853 { 00:25:34.853 "dma_device_id": "system", 00:25:34.853 "dma_device_type": 1 00:25:34.853 }, 00:25:34.853 { 00:25:34.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.853 "dma_device_type": 2 00:25:34.853 } 00:25:34.853 ], 00:25:34.853 "driver_specific": { 00:25:34.853 "raid": { 00:25:34.853 "uuid": "0db6365b-f487-4134-8d14-76dcb14c99f2", 00:25:34.853 "strip_size_kb": 0, 00:25:34.853 "state": "online", 00:25:34.853 "raid_level": "raid1", 00:25:34.853 "superblock": false, 00:25:34.853 "num_base_bdevs": 4, 00:25:34.853 "num_base_bdevs_discovered": 4, 00:25:34.853 "num_base_bdevs_operational": 4, 00:25:34.853 "base_bdevs_list": [ 00:25:34.854 { 00:25:34.854 "name": "BaseBdev1", 00:25:34.854 "uuid": "abdc6655-8fb6-4c85-b0cf-6159f3d295bb", 00:25:34.854 "is_configured": true, 00:25:34.854 "data_offset": 0, 00:25:34.854 "data_size": 65536 00:25:34.854 }, 00:25:34.854 { 00:25:34.854 "name": "BaseBdev2", 00:25:34.854 "uuid": "3ffd0948-5273-42c7-a18e-fc7442adf9d9", 00:25:34.854 "is_configured": true, 00:25:34.854 "data_offset": 0, 00:25:34.854 "data_size": 65536 00:25:34.854 }, 00:25:34.854 { 00:25:34.854 "name": "BaseBdev3", 00:25:34.854 "uuid": "ba0fa38e-0b28-4249-9a3c-f3950e649aca", 00:25:34.854 "is_configured": true, 00:25:34.854 "data_offset": 0, 00:25:34.854 "data_size": 65536 00:25:34.854 }, 00:25:34.854 { 00:25:34.854 "name": "BaseBdev4", 00:25:34.854 "uuid": "3b709e7c-70f7-45ee-83c4-38f1c7c57acc", 00:25:34.854 "is_configured": true, 00:25:34.854 "data_offset": 0, 00:25:34.854 "data_size": 65536 00:25:34.854 } 00:25:34.854 ] 00:25:34.854 } 00:25:34.854 } 00:25:34.854 }' 00:25:34.854 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:34.854 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:34.854 BaseBdev2 00:25:34.854 BaseBdev3 
00:25:34.854 BaseBdev4' 00:25:34.854 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:35.112 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:35.112 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:35.112 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:35.112 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:35.112 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.112 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.112 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.112 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:35.112 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:35.112 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:35.112 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:35.112 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:35.112 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.112 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.112 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.112 13:45:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:35.112 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:35.112 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:35.113 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:35.113 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:35.113 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.113 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.113 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.113 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:35.113 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:35.113 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:35.113 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:35.113 13:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:25:35.113 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.113 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.113 13:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.113 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:35.113 13:45:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:35.113 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:35.113 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.113 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.113 [2024-11-20 13:45:38.010242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:35.371 
13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:35.371 "name": "Existed_Raid", 00:25:35.371 "uuid": "0db6365b-f487-4134-8d14-76dcb14c99f2", 00:25:35.371 "strip_size_kb": 0, 00:25:35.371 "state": "online", 00:25:35.371 "raid_level": "raid1", 00:25:35.371 "superblock": false, 00:25:35.371 "num_base_bdevs": 4, 00:25:35.371 "num_base_bdevs_discovered": 3, 00:25:35.371 "num_base_bdevs_operational": 3, 00:25:35.371 "base_bdevs_list": [ 00:25:35.371 { 00:25:35.371 "name": null, 00:25:35.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.371 "is_configured": false, 00:25:35.371 "data_offset": 0, 00:25:35.371 "data_size": 65536 00:25:35.371 }, 00:25:35.371 { 00:25:35.371 "name": "BaseBdev2", 00:25:35.371 "uuid": "3ffd0948-5273-42c7-a18e-fc7442adf9d9", 00:25:35.371 "is_configured": true, 00:25:35.371 "data_offset": 0, 00:25:35.371 "data_size": 65536 00:25:35.371 }, 00:25:35.371 { 00:25:35.371 "name": "BaseBdev3", 00:25:35.371 "uuid": "ba0fa38e-0b28-4249-9a3c-f3950e649aca", 00:25:35.371 "is_configured": true, 00:25:35.371 "data_offset": 0, 
00:25:35.371 "data_size": 65536 00:25:35.371 }, 00:25:35.371 { 00:25:35.371 "name": "BaseBdev4", 00:25:35.371 "uuid": "3b709e7c-70f7-45ee-83c4-38f1c7c57acc", 00:25:35.371 "is_configured": true, 00:25:35.371 "data_offset": 0, 00:25:35.371 "data_size": 65536 00:25:35.371 } 00:25:35.371 ] 00:25:35.371 }' 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:35.371 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.937 [2024-11-20 13:45:38.706683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:35.937 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.196 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:36.196 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:36.196 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:36.196 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.196 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.196 [2024-11-20 13:45:38.857132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:36.196 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.196 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:36.196 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:36.196 13:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:36.196 13:45:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:36.196 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.196 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.196 13:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.196 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:36.196 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:36.196 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:25:36.196 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.196 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.196 [2024-11-20 13:45:39.009448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:36.196 [2024-11-20 13:45:39.009599] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:36.196 [2024-11-20 13:45:39.097770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:36.196 [2024-11-20 13:45:39.097845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:36.196 [2024-11-20 13:45:39.097867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:36.196 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.196 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:36.196 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:36.196 13:45:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:36.196 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.196 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.196 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:36.196 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.454 BaseBdev2 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.454 [ 00:25:36.454 { 00:25:36.454 "name": "BaseBdev2", 00:25:36.454 "aliases": [ 00:25:36.454 "b624b66d-4ec1-4499-bf55-8bdc16ead411" 00:25:36.454 ], 00:25:36.454 "product_name": "Malloc disk", 00:25:36.454 "block_size": 512, 00:25:36.454 "num_blocks": 65536, 00:25:36.454 "uuid": "b624b66d-4ec1-4499-bf55-8bdc16ead411", 00:25:36.454 "assigned_rate_limits": { 00:25:36.454 "rw_ios_per_sec": 0, 00:25:36.454 "rw_mbytes_per_sec": 0, 00:25:36.454 "r_mbytes_per_sec": 0, 00:25:36.454 "w_mbytes_per_sec": 0 00:25:36.454 }, 00:25:36.454 "claimed": false, 00:25:36.454 "zoned": false, 00:25:36.454 "supported_io_types": { 00:25:36.454 "read": true, 00:25:36.454 "write": true, 00:25:36.454 "unmap": true, 00:25:36.454 "flush": true, 00:25:36.454 "reset": true, 00:25:36.454 "nvme_admin": false, 00:25:36.454 "nvme_io": false, 00:25:36.454 "nvme_io_md": false, 00:25:36.454 "write_zeroes": true, 00:25:36.454 "zcopy": true, 00:25:36.454 "get_zone_info": false, 00:25:36.454 "zone_management": false, 00:25:36.454 "zone_append": false, 
00:25:36.454 "compare": false, 00:25:36.454 "compare_and_write": false, 00:25:36.454 "abort": true, 00:25:36.454 "seek_hole": false, 00:25:36.454 "seek_data": false, 00:25:36.454 "copy": true, 00:25:36.454 "nvme_iov_md": false 00:25:36.454 }, 00:25:36.454 "memory_domains": [ 00:25:36.454 { 00:25:36.454 "dma_device_id": "system", 00:25:36.454 "dma_device_type": 1 00:25:36.454 }, 00:25:36.454 { 00:25:36.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:36.454 "dma_device_type": 2 00:25:36.454 } 00:25:36.454 ], 00:25:36.454 "driver_specific": {} 00:25:36.454 } 00:25:36.454 ] 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.454 BaseBdev3 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:36.454 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.455 [ 00:25:36.455 { 00:25:36.455 "name": "BaseBdev3", 00:25:36.455 "aliases": [ 00:25:36.455 "c8c0d618-f65d-4ecb-899c-5e5943710db6" 00:25:36.455 ], 00:25:36.455 "product_name": "Malloc disk", 00:25:36.455 "block_size": 512, 00:25:36.455 "num_blocks": 65536, 00:25:36.455 "uuid": "c8c0d618-f65d-4ecb-899c-5e5943710db6", 00:25:36.455 "assigned_rate_limits": { 00:25:36.455 "rw_ios_per_sec": 0, 00:25:36.455 "rw_mbytes_per_sec": 0, 00:25:36.455 "r_mbytes_per_sec": 0, 00:25:36.455 "w_mbytes_per_sec": 0 00:25:36.455 }, 00:25:36.455 "claimed": false, 00:25:36.455 "zoned": false, 00:25:36.455 "supported_io_types": { 00:25:36.455 "read": true, 00:25:36.455 "write": true, 00:25:36.455 "unmap": true, 00:25:36.455 "flush": true, 00:25:36.455 "reset": true, 00:25:36.455 "nvme_admin": false, 00:25:36.455 "nvme_io": false, 00:25:36.455 "nvme_io_md": false, 00:25:36.455 "write_zeroes": true, 00:25:36.455 "zcopy": true, 00:25:36.455 "get_zone_info": false, 00:25:36.455 "zone_management": false, 00:25:36.455 "zone_append": false, 
00:25:36.455 "compare": false, 00:25:36.455 "compare_and_write": false, 00:25:36.455 "abort": true, 00:25:36.455 "seek_hole": false, 00:25:36.455 "seek_data": false, 00:25:36.455 "copy": true, 00:25:36.455 "nvme_iov_md": false 00:25:36.455 }, 00:25:36.455 "memory_domains": [ 00:25:36.455 { 00:25:36.455 "dma_device_id": "system", 00:25:36.455 "dma_device_type": 1 00:25:36.455 }, 00:25:36.455 { 00:25:36.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:36.455 "dma_device_type": 2 00:25:36.455 } 00:25:36.455 ], 00:25:36.455 "driver_specific": {} 00:25:36.455 } 00:25:36.455 ] 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.455 BaseBdev4 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.455 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.714 [ 00:25:36.714 { 00:25:36.714 "name": "BaseBdev4", 00:25:36.714 "aliases": [ 00:25:36.714 "9ddb1c01-a538-42a6-913e-731863ba8a36" 00:25:36.714 ], 00:25:36.714 "product_name": "Malloc disk", 00:25:36.714 "block_size": 512, 00:25:36.714 "num_blocks": 65536, 00:25:36.714 "uuid": "9ddb1c01-a538-42a6-913e-731863ba8a36", 00:25:36.714 "assigned_rate_limits": { 00:25:36.714 "rw_ios_per_sec": 0, 00:25:36.714 "rw_mbytes_per_sec": 0, 00:25:36.714 "r_mbytes_per_sec": 0, 00:25:36.714 "w_mbytes_per_sec": 0 00:25:36.714 }, 00:25:36.714 "claimed": false, 00:25:36.714 "zoned": false, 00:25:36.714 "supported_io_types": { 00:25:36.714 "read": true, 00:25:36.714 "write": true, 00:25:36.714 "unmap": true, 00:25:36.714 "flush": true, 00:25:36.714 "reset": true, 00:25:36.714 "nvme_admin": false, 00:25:36.714 "nvme_io": false, 00:25:36.714 "nvme_io_md": false, 00:25:36.714 "write_zeroes": true, 00:25:36.714 "zcopy": true, 00:25:36.714 "get_zone_info": false, 00:25:36.714 "zone_management": false, 00:25:36.714 "zone_append": false, 
00:25:36.714 "compare": false, 00:25:36.714 "compare_and_write": false, 00:25:36.714 "abort": true, 00:25:36.714 "seek_hole": false, 00:25:36.714 "seek_data": false, 00:25:36.714 "copy": true, 00:25:36.714 "nvme_iov_md": false 00:25:36.714 }, 00:25:36.714 "memory_domains": [ 00:25:36.714 { 00:25:36.714 "dma_device_id": "system", 00:25:36.714 "dma_device_type": 1 00:25:36.714 }, 00:25:36.714 { 00:25:36.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:36.714 "dma_device_type": 2 00:25:36.714 } 00:25:36.714 ], 00:25:36.714 "driver_specific": {} 00:25:36.714 } 00:25:36.714 ] 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.714 [2024-11-20 13:45:39.386209] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:36.714 [2024-11-20 13:45:39.386465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:36.714 [2024-11-20 13:45:39.386608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:36.714 [2024-11-20 13:45:39.389304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:36.714 [2024-11-20 13:45:39.389551] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:25:36.714 "name": "Existed_Raid", 00:25:36.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.714 "strip_size_kb": 0, 00:25:36.714 "state": "configuring", 00:25:36.714 "raid_level": "raid1", 00:25:36.714 "superblock": false, 00:25:36.714 "num_base_bdevs": 4, 00:25:36.714 "num_base_bdevs_discovered": 3, 00:25:36.714 "num_base_bdevs_operational": 4, 00:25:36.714 "base_bdevs_list": [ 00:25:36.714 { 00:25:36.714 "name": "BaseBdev1", 00:25:36.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.714 "is_configured": false, 00:25:36.714 "data_offset": 0, 00:25:36.714 "data_size": 0 00:25:36.714 }, 00:25:36.714 { 00:25:36.714 "name": "BaseBdev2", 00:25:36.714 "uuid": "b624b66d-4ec1-4499-bf55-8bdc16ead411", 00:25:36.714 "is_configured": true, 00:25:36.714 "data_offset": 0, 00:25:36.714 "data_size": 65536 00:25:36.714 }, 00:25:36.714 { 00:25:36.714 "name": "BaseBdev3", 00:25:36.714 "uuid": "c8c0d618-f65d-4ecb-899c-5e5943710db6", 00:25:36.714 "is_configured": true, 00:25:36.714 "data_offset": 0, 00:25:36.714 "data_size": 65536 00:25:36.714 }, 00:25:36.714 { 00:25:36.714 "name": "BaseBdev4", 00:25:36.714 "uuid": "9ddb1c01-a538-42a6-913e-731863ba8a36", 00:25:36.714 "is_configured": true, 00:25:36.714 "data_offset": 0, 00:25:36.714 "data_size": 65536 00:25:36.714 } 00:25:36.714 ] 00:25:36.714 }' 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:36.714 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.280 [2024-11-20 13:45:39.926446] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.280 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:37.280 "name": "Existed_Raid", 00:25:37.280 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:37.280 "strip_size_kb": 0, 00:25:37.280 "state": "configuring", 00:25:37.280 "raid_level": "raid1", 00:25:37.280 "superblock": false, 00:25:37.280 "num_base_bdevs": 4, 00:25:37.280 "num_base_bdevs_discovered": 2, 00:25:37.280 "num_base_bdevs_operational": 4, 00:25:37.280 "base_bdevs_list": [ 00:25:37.281 { 00:25:37.281 "name": "BaseBdev1", 00:25:37.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:37.281 "is_configured": false, 00:25:37.281 "data_offset": 0, 00:25:37.281 "data_size": 0 00:25:37.281 }, 00:25:37.281 { 00:25:37.281 "name": null, 00:25:37.281 "uuid": "b624b66d-4ec1-4499-bf55-8bdc16ead411", 00:25:37.281 "is_configured": false, 00:25:37.281 "data_offset": 0, 00:25:37.281 "data_size": 65536 00:25:37.281 }, 00:25:37.281 { 00:25:37.281 "name": "BaseBdev3", 00:25:37.281 "uuid": "c8c0d618-f65d-4ecb-899c-5e5943710db6", 00:25:37.281 "is_configured": true, 00:25:37.281 "data_offset": 0, 00:25:37.281 "data_size": 65536 00:25:37.281 }, 00:25:37.281 { 00:25:37.281 "name": "BaseBdev4", 00:25:37.281 "uuid": "9ddb1c01-a538-42a6-913e-731863ba8a36", 00:25:37.281 "is_configured": true, 00:25:37.281 "data_offset": 0, 00:25:37.281 "data_size": 65536 00:25:37.281 } 00:25:37.281 ] 00:25:37.281 }' 00:25:37.281 13:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:37.281 13:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.848 [2024-11-20 13:45:40.546579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:37.848 BaseBdev1 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.848 [ 00:25:37.848 { 00:25:37.848 "name": "BaseBdev1", 00:25:37.848 "aliases": [ 00:25:37.848 "03d98670-75dc-445f-89c5-0b49a15080df" 00:25:37.848 ], 00:25:37.848 "product_name": "Malloc disk", 00:25:37.848 "block_size": 512, 00:25:37.848 "num_blocks": 65536, 00:25:37.848 "uuid": "03d98670-75dc-445f-89c5-0b49a15080df", 00:25:37.848 "assigned_rate_limits": { 00:25:37.848 "rw_ios_per_sec": 0, 00:25:37.848 "rw_mbytes_per_sec": 0, 00:25:37.848 "r_mbytes_per_sec": 0, 00:25:37.848 "w_mbytes_per_sec": 0 00:25:37.848 }, 00:25:37.848 "claimed": true, 00:25:37.848 "claim_type": "exclusive_write", 00:25:37.848 "zoned": false, 00:25:37.848 "supported_io_types": { 00:25:37.848 "read": true, 00:25:37.848 "write": true, 00:25:37.848 "unmap": true, 00:25:37.848 "flush": true, 00:25:37.848 "reset": true, 00:25:37.848 "nvme_admin": false, 00:25:37.848 "nvme_io": false, 00:25:37.848 "nvme_io_md": false, 00:25:37.848 "write_zeroes": true, 00:25:37.848 "zcopy": true, 00:25:37.848 "get_zone_info": false, 00:25:37.848 "zone_management": false, 00:25:37.848 "zone_append": false, 00:25:37.848 "compare": false, 00:25:37.848 "compare_and_write": false, 00:25:37.848 "abort": true, 00:25:37.848 "seek_hole": false, 00:25:37.848 "seek_data": false, 00:25:37.848 "copy": true, 00:25:37.848 "nvme_iov_md": false 00:25:37.848 }, 00:25:37.848 "memory_domains": [ 00:25:37.848 { 00:25:37.848 "dma_device_id": "system", 00:25:37.848 "dma_device_type": 1 00:25:37.848 }, 00:25:37.848 { 00:25:37.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:37.848 "dma_device_type": 2 00:25:37.848 } 00:25:37.848 ], 00:25:37.848 "driver_specific": {} 00:25:37.848 } 00:25:37.848 ] 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:37.848 "name": "Existed_Raid", 00:25:37.848 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:37.848 "strip_size_kb": 0, 00:25:37.848 "state": "configuring", 00:25:37.848 "raid_level": "raid1", 00:25:37.848 "superblock": false, 00:25:37.848 "num_base_bdevs": 4, 00:25:37.848 "num_base_bdevs_discovered": 3, 00:25:37.848 "num_base_bdevs_operational": 4, 00:25:37.848 "base_bdevs_list": [ 00:25:37.848 { 00:25:37.848 "name": "BaseBdev1", 00:25:37.848 "uuid": "03d98670-75dc-445f-89c5-0b49a15080df", 00:25:37.848 "is_configured": true, 00:25:37.848 "data_offset": 0, 00:25:37.848 "data_size": 65536 00:25:37.848 }, 00:25:37.848 { 00:25:37.848 "name": null, 00:25:37.848 "uuid": "b624b66d-4ec1-4499-bf55-8bdc16ead411", 00:25:37.848 "is_configured": false, 00:25:37.848 "data_offset": 0, 00:25:37.848 "data_size": 65536 00:25:37.848 }, 00:25:37.848 { 00:25:37.848 "name": "BaseBdev3", 00:25:37.848 "uuid": "c8c0d618-f65d-4ecb-899c-5e5943710db6", 00:25:37.848 "is_configured": true, 00:25:37.848 "data_offset": 0, 00:25:37.848 "data_size": 65536 00:25:37.848 }, 00:25:37.848 { 00:25:37.848 "name": "BaseBdev4", 00:25:37.848 "uuid": "9ddb1c01-a538-42a6-913e-731863ba8a36", 00:25:37.848 "is_configured": true, 00:25:37.848 "data_offset": 0, 00:25:37.848 "data_size": 65536 00:25:37.848 } 00:25:37.848 ] 00:25:37.848 }' 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:37.848 13:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.414 [2024-11-20 13:45:41.162844] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:38.414 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:25:38.415 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:38.415 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.415 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.415 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.415 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:38.415 "name": "Existed_Raid", 00:25:38.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.415 "strip_size_kb": 0, 00:25:38.415 "state": "configuring", 00:25:38.415 "raid_level": "raid1", 00:25:38.415 "superblock": false, 00:25:38.415 "num_base_bdevs": 4, 00:25:38.415 "num_base_bdevs_discovered": 2, 00:25:38.415 "num_base_bdevs_operational": 4, 00:25:38.415 "base_bdevs_list": [ 00:25:38.415 { 00:25:38.415 "name": "BaseBdev1", 00:25:38.415 "uuid": "03d98670-75dc-445f-89c5-0b49a15080df", 00:25:38.415 "is_configured": true, 00:25:38.415 "data_offset": 0, 00:25:38.415 "data_size": 65536 00:25:38.415 }, 00:25:38.415 { 00:25:38.415 "name": null, 00:25:38.415 "uuid": "b624b66d-4ec1-4499-bf55-8bdc16ead411", 00:25:38.415 "is_configured": false, 00:25:38.415 "data_offset": 0, 00:25:38.415 "data_size": 65536 00:25:38.415 }, 00:25:38.415 { 00:25:38.415 "name": null, 00:25:38.415 "uuid": "c8c0d618-f65d-4ecb-899c-5e5943710db6", 00:25:38.415 "is_configured": false, 00:25:38.415 "data_offset": 0, 00:25:38.415 "data_size": 65536 00:25:38.415 }, 00:25:38.415 { 00:25:38.415 "name": "BaseBdev4", 00:25:38.415 "uuid": "9ddb1c01-a538-42a6-913e-731863ba8a36", 00:25:38.415 "is_configured": true, 00:25:38.415 "data_offset": 0, 00:25:38.415 "data_size": 65536 00:25:38.415 } 00:25:38.415 ] 00:25:38.415 }' 00:25:38.415 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:38.415 13:45:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.981 [2024-11-20 13:45:41.775018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:38.981 13:45:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:38.981 "name": "Existed_Raid", 00:25:38.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.981 "strip_size_kb": 0, 00:25:38.981 "state": "configuring", 00:25:38.981 "raid_level": "raid1", 00:25:38.981 "superblock": false, 00:25:38.981 "num_base_bdevs": 4, 00:25:38.981 "num_base_bdevs_discovered": 3, 00:25:38.981 "num_base_bdevs_operational": 4, 00:25:38.981 "base_bdevs_list": [ 00:25:38.981 { 00:25:38.981 "name": "BaseBdev1", 00:25:38.981 "uuid": "03d98670-75dc-445f-89c5-0b49a15080df", 00:25:38.981 "is_configured": true, 00:25:38.981 "data_offset": 0, 00:25:38.981 "data_size": 65536 00:25:38.981 }, 00:25:38.981 { 00:25:38.981 "name": null, 00:25:38.981 "uuid": "b624b66d-4ec1-4499-bf55-8bdc16ead411", 00:25:38.981 "is_configured": false, 00:25:38.981 "data_offset": 
0, 00:25:38.981 "data_size": 65536 00:25:38.981 }, 00:25:38.981 { 00:25:38.981 "name": "BaseBdev3", 00:25:38.981 "uuid": "c8c0d618-f65d-4ecb-899c-5e5943710db6", 00:25:38.981 "is_configured": true, 00:25:38.981 "data_offset": 0, 00:25:38.981 "data_size": 65536 00:25:38.981 }, 00:25:38.981 { 00:25:38.981 "name": "BaseBdev4", 00:25:38.981 "uuid": "9ddb1c01-a538-42a6-913e-731863ba8a36", 00:25:38.981 "is_configured": true, 00:25:38.981 "data_offset": 0, 00:25:38.981 "data_size": 65536 00:25:38.981 } 00:25:38.981 ] 00:25:38.981 }' 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:38.981 13:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.549 [2024-11-20 13:45:42.359316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.549 13:45:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.549 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.808 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.808 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:39.808 "name": "Existed_Raid", 00:25:39.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.808 "strip_size_kb": 0, 00:25:39.808 "state": "configuring", 00:25:39.808 
"raid_level": "raid1", 00:25:39.808 "superblock": false, 00:25:39.808 "num_base_bdevs": 4, 00:25:39.808 "num_base_bdevs_discovered": 2, 00:25:39.808 "num_base_bdevs_operational": 4, 00:25:39.808 "base_bdevs_list": [ 00:25:39.808 { 00:25:39.808 "name": null, 00:25:39.808 "uuid": "03d98670-75dc-445f-89c5-0b49a15080df", 00:25:39.808 "is_configured": false, 00:25:39.808 "data_offset": 0, 00:25:39.808 "data_size": 65536 00:25:39.808 }, 00:25:39.808 { 00:25:39.808 "name": null, 00:25:39.808 "uuid": "b624b66d-4ec1-4499-bf55-8bdc16ead411", 00:25:39.808 "is_configured": false, 00:25:39.808 "data_offset": 0, 00:25:39.808 "data_size": 65536 00:25:39.808 }, 00:25:39.808 { 00:25:39.808 "name": "BaseBdev3", 00:25:39.808 "uuid": "c8c0d618-f65d-4ecb-899c-5e5943710db6", 00:25:39.808 "is_configured": true, 00:25:39.808 "data_offset": 0, 00:25:39.808 "data_size": 65536 00:25:39.808 }, 00:25:39.808 { 00:25:39.808 "name": "BaseBdev4", 00:25:39.808 "uuid": "9ddb1c01-a538-42a6-913e-731863ba8a36", 00:25:39.808 "is_configured": true, 00:25:39.808 "data_offset": 0, 00:25:39.808 "data_size": 65536 00:25:39.808 } 00:25:39.808 ] 00:25:39.808 }' 00:25:39.808 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:39.808 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.068 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:40.068 13:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:40.068 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.068 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.327 13:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.327 [2024-11-20 13:45:43.027209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:40.327 "name": "Existed_Raid", 00:25:40.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.327 "strip_size_kb": 0, 00:25:40.327 "state": "configuring", 00:25:40.327 "raid_level": "raid1", 00:25:40.327 "superblock": false, 00:25:40.327 "num_base_bdevs": 4, 00:25:40.327 "num_base_bdevs_discovered": 3, 00:25:40.327 "num_base_bdevs_operational": 4, 00:25:40.327 "base_bdevs_list": [ 00:25:40.327 { 00:25:40.327 "name": null, 00:25:40.327 "uuid": "03d98670-75dc-445f-89c5-0b49a15080df", 00:25:40.327 "is_configured": false, 00:25:40.327 "data_offset": 0, 00:25:40.327 "data_size": 65536 00:25:40.327 }, 00:25:40.327 { 00:25:40.327 "name": "BaseBdev2", 00:25:40.327 "uuid": "b624b66d-4ec1-4499-bf55-8bdc16ead411", 00:25:40.327 "is_configured": true, 00:25:40.327 "data_offset": 0, 00:25:40.327 "data_size": 65536 00:25:40.327 }, 00:25:40.327 { 00:25:40.327 "name": "BaseBdev3", 00:25:40.327 "uuid": "c8c0d618-f65d-4ecb-899c-5e5943710db6", 00:25:40.327 "is_configured": true, 00:25:40.327 "data_offset": 0, 00:25:40.327 "data_size": 65536 00:25:40.327 }, 00:25:40.327 { 00:25:40.327 "name": "BaseBdev4", 00:25:40.327 "uuid": "9ddb1c01-a538-42a6-913e-731863ba8a36", 00:25:40.327 "is_configured": true, 00:25:40.327 "data_offset": 0, 00:25:40.327 "data_size": 65536 00:25:40.327 } 00:25:40.327 ] 00:25:40.327 }' 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:40.327 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.894 13:45:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 03d98670-75dc-445f-89c5-0b49a15080df 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.894 [2024-11-20 13:45:43.703308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:40.894 [2024-11-20 13:45:43.703642] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:40.894 [2024-11-20 13:45:43.703672] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:40.894 
[2024-11-20 13:45:43.704088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:25:40.894 [2024-11-20 13:45:43.704331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:40.894 [2024-11-20 13:45:43.704346] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:25:40.894 [2024-11-20 13:45:43.704675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:40.894 NewBaseBdev 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.894 [ 00:25:40.894 { 00:25:40.894 "name": "NewBaseBdev", 00:25:40.894 "aliases": [ 00:25:40.894 "03d98670-75dc-445f-89c5-0b49a15080df" 00:25:40.894 ], 00:25:40.894 "product_name": "Malloc disk", 00:25:40.894 "block_size": 512, 00:25:40.894 "num_blocks": 65536, 00:25:40.894 "uuid": "03d98670-75dc-445f-89c5-0b49a15080df", 00:25:40.894 "assigned_rate_limits": { 00:25:40.894 "rw_ios_per_sec": 0, 00:25:40.894 "rw_mbytes_per_sec": 0, 00:25:40.894 "r_mbytes_per_sec": 0, 00:25:40.894 "w_mbytes_per_sec": 0 00:25:40.894 }, 00:25:40.894 "claimed": true, 00:25:40.894 "claim_type": "exclusive_write", 00:25:40.894 "zoned": false, 00:25:40.894 "supported_io_types": { 00:25:40.894 "read": true, 00:25:40.894 "write": true, 00:25:40.894 "unmap": true, 00:25:40.894 "flush": true, 00:25:40.894 "reset": true, 00:25:40.894 "nvme_admin": false, 00:25:40.894 "nvme_io": false, 00:25:40.894 "nvme_io_md": false, 00:25:40.894 "write_zeroes": true, 00:25:40.894 "zcopy": true, 00:25:40.894 "get_zone_info": false, 00:25:40.894 "zone_management": false, 00:25:40.894 "zone_append": false, 00:25:40.894 "compare": false, 00:25:40.894 "compare_and_write": false, 00:25:40.894 "abort": true, 00:25:40.894 "seek_hole": false, 00:25:40.894 "seek_data": false, 00:25:40.894 "copy": true, 00:25:40.894 "nvme_iov_md": false 00:25:40.894 }, 00:25:40.894 "memory_domains": [ 00:25:40.894 { 00:25:40.894 "dma_device_id": "system", 00:25:40.894 "dma_device_type": 1 00:25:40.894 }, 00:25:40.894 { 00:25:40.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.894 "dma_device_type": 2 00:25:40.894 } 00:25:40.894 ], 00:25:40.894 "driver_specific": {} 00:25:40.894 } 00:25:40.894 ] 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:40.894 "name": "Existed_Raid", 00:25:40.894 "uuid": "78fd31f3-b275-41b6-9e77-6aeb29505924", 00:25:40.894 "strip_size_kb": 0, 00:25:40.894 "state": "online", 00:25:40.894 
"raid_level": "raid1", 00:25:40.894 "superblock": false, 00:25:40.894 "num_base_bdevs": 4, 00:25:40.894 "num_base_bdevs_discovered": 4, 00:25:40.894 "num_base_bdevs_operational": 4, 00:25:40.894 "base_bdevs_list": [ 00:25:40.894 { 00:25:40.894 "name": "NewBaseBdev", 00:25:40.894 "uuid": "03d98670-75dc-445f-89c5-0b49a15080df", 00:25:40.894 "is_configured": true, 00:25:40.894 "data_offset": 0, 00:25:40.894 "data_size": 65536 00:25:40.894 }, 00:25:40.894 { 00:25:40.894 "name": "BaseBdev2", 00:25:40.894 "uuid": "b624b66d-4ec1-4499-bf55-8bdc16ead411", 00:25:40.894 "is_configured": true, 00:25:40.894 "data_offset": 0, 00:25:40.894 "data_size": 65536 00:25:40.894 }, 00:25:40.894 { 00:25:40.894 "name": "BaseBdev3", 00:25:40.894 "uuid": "c8c0d618-f65d-4ecb-899c-5e5943710db6", 00:25:40.894 "is_configured": true, 00:25:40.894 "data_offset": 0, 00:25:40.894 "data_size": 65536 00:25:40.894 }, 00:25:40.894 { 00:25:40.894 "name": "BaseBdev4", 00:25:40.894 "uuid": "9ddb1c01-a538-42a6-913e-731863ba8a36", 00:25:40.894 "is_configured": true, 00:25:40.894 "data_offset": 0, 00:25:40.894 "data_size": 65536 00:25:40.894 } 00:25:40.894 ] 00:25:40.894 }' 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:40.894 13:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.461 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:25:41.461 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:41.461 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:41.461 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:41.461 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:41.461 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:25:41.461 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:41.461 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:41.461 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.461 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.461 [2024-11-20 13:45:44.275967] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:41.461 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.461 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:41.461 "name": "Existed_Raid", 00:25:41.461 "aliases": [ 00:25:41.461 "78fd31f3-b275-41b6-9e77-6aeb29505924" 00:25:41.461 ], 00:25:41.461 "product_name": "Raid Volume", 00:25:41.461 "block_size": 512, 00:25:41.461 "num_blocks": 65536, 00:25:41.461 "uuid": "78fd31f3-b275-41b6-9e77-6aeb29505924", 00:25:41.461 "assigned_rate_limits": { 00:25:41.461 "rw_ios_per_sec": 0, 00:25:41.461 "rw_mbytes_per_sec": 0, 00:25:41.461 "r_mbytes_per_sec": 0, 00:25:41.461 "w_mbytes_per_sec": 0 00:25:41.461 }, 00:25:41.461 "claimed": false, 00:25:41.461 "zoned": false, 00:25:41.461 "supported_io_types": { 00:25:41.461 "read": true, 00:25:41.461 "write": true, 00:25:41.461 "unmap": false, 00:25:41.461 "flush": false, 00:25:41.461 "reset": true, 00:25:41.461 "nvme_admin": false, 00:25:41.461 "nvme_io": false, 00:25:41.461 "nvme_io_md": false, 00:25:41.461 "write_zeroes": true, 00:25:41.461 "zcopy": false, 00:25:41.461 "get_zone_info": false, 00:25:41.461 "zone_management": false, 00:25:41.461 "zone_append": false, 00:25:41.461 "compare": false, 00:25:41.461 "compare_and_write": false, 00:25:41.461 "abort": false, 00:25:41.461 "seek_hole": false, 00:25:41.461 "seek_data": false, 00:25:41.461 
"copy": false, 00:25:41.461 "nvme_iov_md": false 00:25:41.461 }, 00:25:41.461 "memory_domains": [ 00:25:41.461 { 00:25:41.461 "dma_device_id": "system", 00:25:41.461 "dma_device_type": 1 00:25:41.461 }, 00:25:41.461 { 00:25:41.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.461 "dma_device_type": 2 00:25:41.461 }, 00:25:41.461 { 00:25:41.461 "dma_device_id": "system", 00:25:41.461 "dma_device_type": 1 00:25:41.461 }, 00:25:41.461 { 00:25:41.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.461 "dma_device_type": 2 00:25:41.461 }, 00:25:41.461 { 00:25:41.461 "dma_device_id": "system", 00:25:41.461 "dma_device_type": 1 00:25:41.461 }, 00:25:41.461 { 00:25:41.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.461 "dma_device_type": 2 00:25:41.461 }, 00:25:41.461 { 00:25:41.461 "dma_device_id": "system", 00:25:41.461 "dma_device_type": 1 00:25:41.461 }, 00:25:41.461 { 00:25:41.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.461 "dma_device_type": 2 00:25:41.461 } 00:25:41.461 ], 00:25:41.461 "driver_specific": { 00:25:41.461 "raid": { 00:25:41.461 "uuid": "78fd31f3-b275-41b6-9e77-6aeb29505924", 00:25:41.461 "strip_size_kb": 0, 00:25:41.461 "state": "online", 00:25:41.461 "raid_level": "raid1", 00:25:41.461 "superblock": false, 00:25:41.461 "num_base_bdevs": 4, 00:25:41.461 "num_base_bdevs_discovered": 4, 00:25:41.461 "num_base_bdevs_operational": 4, 00:25:41.461 "base_bdevs_list": [ 00:25:41.461 { 00:25:41.461 "name": "NewBaseBdev", 00:25:41.461 "uuid": "03d98670-75dc-445f-89c5-0b49a15080df", 00:25:41.461 "is_configured": true, 00:25:41.461 "data_offset": 0, 00:25:41.461 "data_size": 65536 00:25:41.461 }, 00:25:41.461 { 00:25:41.461 "name": "BaseBdev2", 00:25:41.462 "uuid": "b624b66d-4ec1-4499-bf55-8bdc16ead411", 00:25:41.462 "is_configured": true, 00:25:41.462 "data_offset": 0, 00:25:41.462 "data_size": 65536 00:25:41.462 }, 00:25:41.462 { 00:25:41.462 "name": "BaseBdev3", 00:25:41.462 "uuid": "c8c0d618-f65d-4ecb-899c-5e5943710db6", 00:25:41.462 
"is_configured": true, 00:25:41.462 "data_offset": 0, 00:25:41.462 "data_size": 65536 00:25:41.462 }, 00:25:41.462 { 00:25:41.462 "name": "BaseBdev4", 00:25:41.462 "uuid": "9ddb1c01-a538-42a6-913e-731863ba8a36", 00:25:41.462 "is_configured": true, 00:25:41.462 "data_offset": 0, 00:25:41.462 "data_size": 65536 00:25:41.462 } 00:25:41.462 ] 00:25:41.462 } 00:25:41.462 } 00:25:41.462 }' 00:25:41.462 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:41.729 BaseBdev2 00:25:41.729 BaseBdev3 00:25:41.729 BaseBdev4' 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:41.729 13:45:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:41.729 13:45:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.729 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.988 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:41.988 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:41.988 13:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:41.988 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.988 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.988 [2024-11-20 13:45:44.651569] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:41.988 [2024-11-20 13:45:44.651604] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:41.988 [2024-11-20 13:45:44.651721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:41.988 [2024-11-20 13:45:44.652103] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:41.988 [2024-11-20 13:45:44.652127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:25:41.988 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.988 13:45:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73524 00:25:41.988 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73524 ']' 00:25:41.988 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73524 00:25:41.988 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:25:41.988 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:41.988 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73524 00:25:41.988 killing process with pid 73524 00:25:41.988 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:41.988 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:41.988 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73524' 00:25:41.988 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73524 00:25:41.988 [2024-11-20 13:45:44.692188] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:41.988 13:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73524 00:25:42.247 [2024-11-20 13:45:45.052248] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:25:43.623 00:25:43.623 real 0m13.275s 00:25:43.623 user 0m21.997s 00:25:43.623 sys 0m1.886s 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:43.623 ************************************ 00:25:43.623 END TEST raid_state_function_test 00:25:43.623 ************************************ 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:25:43.623 13:45:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:25:43.623 13:45:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:43.623 13:45:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:43.623 13:45:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:43.623 ************************************ 00:25:43.623 START TEST raid_state_function_test_sb 00:25:43.623 ************************************ 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:43.623 
13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:25:43.623 Process raid pid: 74207 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74207 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74207' 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74207 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74207 ']' 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:43.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:43.623 13:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:43.623 [2024-11-20 13:45:46.311732] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:25:43.623 [2024-11-20 13:45:46.312091] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:43.623 [2024-11-20 13:45:46.495903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.881 [2024-11-20 13:45:46.629774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.140 [2024-11-20 13:45:46.842328] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:44.140 [2024-11-20 13:45:46.842399] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:44.398 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:44.398 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:25:44.398 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:44.398 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.398 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:44.398 [2024-11-20 13:45:47.302880] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:44.398 [2024-11-20 13:45:47.302961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:44.398 [2024-11-20 13:45:47.302979] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:44.398 [2024-11-20 13:45:47.302996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:44.398 [2024-11-20 13:45:47.303006] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:25:44.398 [2024-11-20 13:45:47.303021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:44.398 [2024-11-20 13:45:47.303036] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:44.398 [2024-11-20 13:45:47.303050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:44.398 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.398 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:44.398 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:44.398 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:44.398 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:44.398 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:44.398 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:44.398 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:44.398 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:44.398 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:44.398 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:44.660 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.660 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.660 13:45:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:44.660 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:44.660 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.660 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:44.660 "name": "Existed_Raid", 00:25:44.660 "uuid": "000b737c-1209-4b93-97ff-819fcd47e9db", 00:25:44.660 "strip_size_kb": 0, 00:25:44.660 "state": "configuring", 00:25:44.660 "raid_level": "raid1", 00:25:44.660 "superblock": true, 00:25:44.660 "num_base_bdevs": 4, 00:25:44.660 "num_base_bdevs_discovered": 0, 00:25:44.660 "num_base_bdevs_operational": 4, 00:25:44.660 "base_bdevs_list": [ 00:25:44.660 { 00:25:44.660 "name": "BaseBdev1", 00:25:44.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.660 "is_configured": false, 00:25:44.660 "data_offset": 0, 00:25:44.660 "data_size": 0 00:25:44.660 }, 00:25:44.660 { 00:25:44.660 "name": "BaseBdev2", 00:25:44.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.660 "is_configured": false, 00:25:44.660 "data_offset": 0, 00:25:44.660 "data_size": 0 00:25:44.660 }, 00:25:44.660 { 00:25:44.660 "name": "BaseBdev3", 00:25:44.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.660 "is_configured": false, 00:25:44.660 "data_offset": 0, 00:25:44.660 "data_size": 0 00:25:44.660 }, 00:25:44.660 { 00:25:44.660 "name": "BaseBdev4", 00:25:44.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.660 "is_configured": false, 00:25:44.660 "data_offset": 0, 00:25:44.660 "data_size": 0 00:25:44.660 } 00:25:44.660 ] 00:25:44.660 }' 00:25:44.660 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:44.661 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:44.933 13:45:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:44.933 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.933 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:44.933 [2024-11-20 13:45:47.822947] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:44.933 [2024-11-20 13:45:47.822996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:44.933 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.933 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:44.933 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.933 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:44.933 [2024-11-20 13:45:47.830959] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:44.933 [2024-11-20 13:45:47.831052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:44.933 [2024-11-20 13:45:47.831068] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:44.933 [2024-11-20 13:45:47.831084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:44.933 [2024-11-20 13:45:47.831094] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:44.933 [2024-11-20 13:45:47.831108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:44.933 [2024-11-20 13:45:47.831117] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:25:44.933 [2024-11-20 13:45:47.831131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:44.933 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.933 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:44.933 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.933 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.192 [2024-11-20 13:45:47.877177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:45.192 BaseBdev1 00:25:45.192 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.192 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:25:45.192 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:45.192 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:45.192 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:45.192 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:45.192 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:45.192 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:45.192 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.192 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.192 13:45:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.192 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:45.192 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.192 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.192 [ 00:25:45.192 { 00:25:45.192 "name": "BaseBdev1", 00:25:45.192 "aliases": [ 00:25:45.192 "1472d064-a0a0-4afd-adc4-d9572ede3fe6" 00:25:45.192 ], 00:25:45.192 "product_name": "Malloc disk", 00:25:45.192 "block_size": 512, 00:25:45.192 "num_blocks": 65536, 00:25:45.192 "uuid": "1472d064-a0a0-4afd-adc4-d9572ede3fe6", 00:25:45.192 "assigned_rate_limits": { 00:25:45.192 "rw_ios_per_sec": 0, 00:25:45.192 "rw_mbytes_per_sec": 0, 00:25:45.192 "r_mbytes_per_sec": 0, 00:25:45.192 "w_mbytes_per_sec": 0 00:25:45.192 }, 00:25:45.192 "claimed": true, 00:25:45.192 "claim_type": "exclusive_write", 00:25:45.192 "zoned": false, 00:25:45.192 "supported_io_types": { 00:25:45.192 "read": true, 00:25:45.192 "write": true, 00:25:45.192 "unmap": true, 00:25:45.192 "flush": true, 00:25:45.192 "reset": true, 00:25:45.192 "nvme_admin": false, 00:25:45.193 "nvme_io": false, 00:25:45.193 "nvme_io_md": false, 00:25:45.193 "write_zeroes": true, 00:25:45.193 "zcopy": true, 00:25:45.193 "get_zone_info": false, 00:25:45.193 "zone_management": false, 00:25:45.193 "zone_append": false, 00:25:45.193 "compare": false, 00:25:45.193 "compare_and_write": false, 00:25:45.193 "abort": true, 00:25:45.193 "seek_hole": false, 00:25:45.193 "seek_data": false, 00:25:45.193 "copy": true, 00:25:45.193 "nvme_iov_md": false 00:25:45.193 }, 00:25:45.193 "memory_domains": [ 00:25:45.193 { 00:25:45.193 "dma_device_id": "system", 00:25:45.193 "dma_device_type": 1 00:25:45.193 }, 00:25:45.193 { 00:25:45.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:45.193 "dma_device_type": 2 00:25:45.193 } 00:25:45.193 
], 00:25:45.193 "driver_specific": {} 00:25:45.193 } 00:25:45.193 ] 00:25:45.193 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.193 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:45.193 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:45.193 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:45.193 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:45.193 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:45.193 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:45.193 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:45.193 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:45.193 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:45.193 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:45.193 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:45.193 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.193 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.193 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:45.193 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.193 13:45:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.193 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:45.193 "name": "Existed_Raid", 00:25:45.193 "uuid": "68cd55c5-bddb-4e7e-a5e4-d135fa3d1300", 00:25:45.193 "strip_size_kb": 0, 00:25:45.193 "state": "configuring", 00:25:45.193 "raid_level": "raid1", 00:25:45.193 "superblock": true, 00:25:45.193 "num_base_bdevs": 4, 00:25:45.193 "num_base_bdevs_discovered": 1, 00:25:45.193 "num_base_bdevs_operational": 4, 00:25:45.193 "base_bdevs_list": [ 00:25:45.193 { 00:25:45.193 "name": "BaseBdev1", 00:25:45.193 "uuid": "1472d064-a0a0-4afd-adc4-d9572ede3fe6", 00:25:45.193 "is_configured": true, 00:25:45.193 "data_offset": 2048, 00:25:45.193 "data_size": 63488 00:25:45.193 }, 00:25:45.193 { 00:25:45.193 "name": "BaseBdev2", 00:25:45.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.193 "is_configured": false, 00:25:45.193 "data_offset": 0, 00:25:45.193 "data_size": 0 00:25:45.193 }, 00:25:45.193 { 00:25:45.193 "name": "BaseBdev3", 00:25:45.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.193 "is_configured": false, 00:25:45.193 "data_offset": 0, 00:25:45.193 "data_size": 0 00:25:45.193 }, 00:25:45.193 { 00:25:45.193 "name": "BaseBdev4", 00:25:45.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.193 "is_configured": false, 00:25:45.193 "data_offset": 0, 00:25:45.193 "data_size": 0 00:25:45.193 } 00:25:45.193 ] 00:25:45.193 }' 00:25:45.193 13:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:45.193 13:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.762 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:45.762 13:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.762 13:45:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.762 [2024-11-20 13:45:48.457414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:45.762 [2024-11-20 13:45:48.457483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:45.762 13:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.762 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:45.762 13:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.762 13:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.762 [2024-11-20 13:45:48.465491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:45.763 [2024-11-20 13:45:48.468128] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:45.763 [2024-11-20 13:45:48.468187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:45.763 [2024-11-20 13:45:48.468204] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:45.763 [2024-11-20 13:45:48.468222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:45.763 [2024-11-20 13:45:48.468232] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:45.763 [2024-11-20 13:45:48.468245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:25:45.763 "name": "Existed_Raid", 00:25:45.763 "uuid": "cbdb4d66-906c-4a40-b030-374a917cb04c", 00:25:45.763 "strip_size_kb": 0, 00:25:45.763 "state": "configuring", 00:25:45.763 "raid_level": "raid1", 00:25:45.763 "superblock": true, 00:25:45.763 "num_base_bdevs": 4, 00:25:45.763 "num_base_bdevs_discovered": 1, 00:25:45.763 "num_base_bdevs_operational": 4, 00:25:45.763 "base_bdevs_list": [ 00:25:45.763 { 00:25:45.763 "name": "BaseBdev1", 00:25:45.763 "uuid": "1472d064-a0a0-4afd-adc4-d9572ede3fe6", 00:25:45.763 "is_configured": true, 00:25:45.763 "data_offset": 2048, 00:25:45.763 "data_size": 63488 00:25:45.763 }, 00:25:45.763 { 00:25:45.763 "name": "BaseBdev2", 00:25:45.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.763 "is_configured": false, 00:25:45.763 "data_offset": 0, 00:25:45.763 "data_size": 0 00:25:45.763 }, 00:25:45.763 { 00:25:45.763 "name": "BaseBdev3", 00:25:45.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.763 "is_configured": false, 00:25:45.763 "data_offset": 0, 00:25:45.763 "data_size": 0 00:25:45.763 }, 00:25:45.763 { 00:25:45.763 "name": "BaseBdev4", 00:25:45.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.763 "is_configured": false, 00:25:45.763 "data_offset": 0, 00:25:45.763 "data_size": 0 00:25:45.763 } 00:25:45.763 ] 00:25:45.763 }' 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:45.763 13:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.331 13:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:46.331 13:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.331 13:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.331 [2024-11-20 13:45:49.025272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:25:46.331 BaseBdev2 00:25:46.331 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.331 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:25:46.331 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:46.331 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:46.331 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:46.331 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:46.331 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:46.331 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:46.331 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.331 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.331 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.331 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:46.331 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.331 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.331 [ 00:25:46.331 { 00:25:46.331 "name": "BaseBdev2", 00:25:46.331 "aliases": [ 00:25:46.331 "a1faa224-d631-43ec-9b81-382eca321f1e" 00:25:46.331 ], 00:25:46.331 "product_name": "Malloc disk", 00:25:46.331 "block_size": 512, 00:25:46.331 "num_blocks": 65536, 00:25:46.331 "uuid": "a1faa224-d631-43ec-9b81-382eca321f1e", 00:25:46.331 
"assigned_rate_limits": { 00:25:46.331 "rw_ios_per_sec": 0, 00:25:46.331 "rw_mbytes_per_sec": 0, 00:25:46.331 "r_mbytes_per_sec": 0, 00:25:46.331 "w_mbytes_per_sec": 0 00:25:46.332 }, 00:25:46.332 "claimed": true, 00:25:46.332 "claim_type": "exclusive_write", 00:25:46.332 "zoned": false, 00:25:46.332 "supported_io_types": { 00:25:46.332 "read": true, 00:25:46.332 "write": true, 00:25:46.332 "unmap": true, 00:25:46.332 "flush": true, 00:25:46.332 "reset": true, 00:25:46.332 "nvme_admin": false, 00:25:46.332 "nvme_io": false, 00:25:46.332 "nvme_io_md": false, 00:25:46.332 "write_zeroes": true, 00:25:46.332 "zcopy": true, 00:25:46.332 "get_zone_info": false, 00:25:46.332 "zone_management": false, 00:25:46.332 "zone_append": false, 00:25:46.332 "compare": false, 00:25:46.332 "compare_and_write": false, 00:25:46.332 "abort": true, 00:25:46.332 "seek_hole": false, 00:25:46.332 "seek_data": false, 00:25:46.332 "copy": true, 00:25:46.332 "nvme_iov_md": false 00:25:46.332 }, 00:25:46.332 "memory_domains": [ 00:25:46.332 { 00:25:46.332 "dma_device_id": "system", 00:25:46.332 "dma_device_type": 1 00:25:46.332 }, 00:25:46.332 { 00:25:46.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:46.332 "dma_device_type": 2 00:25:46.332 } 00:25:46.332 ], 00:25:46.332 "driver_specific": {} 00:25:46.332 } 00:25:46.332 ] 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:46.332 "name": "Existed_Raid", 00:25:46.332 "uuid": "cbdb4d66-906c-4a40-b030-374a917cb04c", 00:25:46.332 "strip_size_kb": 0, 00:25:46.332 "state": "configuring", 00:25:46.332 "raid_level": "raid1", 00:25:46.332 "superblock": true, 00:25:46.332 "num_base_bdevs": 4, 00:25:46.332 "num_base_bdevs_discovered": 2, 00:25:46.332 "num_base_bdevs_operational": 4, 
00:25:46.332 "base_bdevs_list": [ 00:25:46.332 { 00:25:46.332 "name": "BaseBdev1", 00:25:46.332 "uuid": "1472d064-a0a0-4afd-adc4-d9572ede3fe6", 00:25:46.332 "is_configured": true, 00:25:46.332 "data_offset": 2048, 00:25:46.332 "data_size": 63488 00:25:46.332 }, 00:25:46.332 { 00:25:46.332 "name": "BaseBdev2", 00:25:46.332 "uuid": "a1faa224-d631-43ec-9b81-382eca321f1e", 00:25:46.332 "is_configured": true, 00:25:46.332 "data_offset": 2048, 00:25:46.332 "data_size": 63488 00:25:46.332 }, 00:25:46.332 { 00:25:46.332 "name": "BaseBdev3", 00:25:46.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.332 "is_configured": false, 00:25:46.332 "data_offset": 0, 00:25:46.332 "data_size": 0 00:25:46.332 }, 00:25:46.332 { 00:25:46.332 "name": "BaseBdev4", 00:25:46.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.332 "is_configured": false, 00:25:46.332 "data_offset": 0, 00:25:46.332 "data_size": 0 00:25:46.332 } 00:25:46.332 ] 00:25:46.332 }' 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:46.332 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.897 [2024-11-20 13:45:49.637261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:46.897 BaseBdev3 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.897 [ 00:25:46.897 { 00:25:46.897 "name": "BaseBdev3", 00:25:46.897 "aliases": [ 00:25:46.897 "eb9e7380-c223-4565-8998-f0ac063c2350" 00:25:46.897 ], 00:25:46.897 "product_name": "Malloc disk", 00:25:46.897 "block_size": 512, 00:25:46.897 "num_blocks": 65536, 00:25:46.897 "uuid": "eb9e7380-c223-4565-8998-f0ac063c2350", 00:25:46.897 "assigned_rate_limits": { 00:25:46.897 "rw_ios_per_sec": 0, 00:25:46.897 "rw_mbytes_per_sec": 0, 00:25:46.897 "r_mbytes_per_sec": 0, 00:25:46.897 "w_mbytes_per_sec": 0 00:25:46.897 }, 00:25:46.897 "claimed": true, 00:25:46.897 "claim_type": "exclusive_write", 00:25:46.897 "zoned": false, 00:25:46.897 "supported_io_types": { 00:25:46.897 "read": true, 00:25:46.897 
"write": true, 00:25:46.897 "unmap": true, 00:25:46.897 "flush": true, 00:25:46.897 "reset": true, 00:25:46.897 "nvme_admin": false, 00:25:46.897 "nvme_io": false, 00:25:46.897 "nvme_io_md": false, 00:25:46.897 "write_zeroes": true, 00:25:46.897 "zcopy": true, 00:25:46.897 "get_zone_info": false, 00:25:46.897 "zone_management": false, 00:25:46.897 "zone_append": false, 00:25:46.897 "compare": false, 00:25:46.897 "compare_and_write": false, 00:25:46.897 "abort": true, 00:25:46.897 "seek_hole": false, 00:25:46.897 "seek_data": false, 00:25:46.897 "copy": true, 00:25:46.897 "nvme_iov_md": false 00:25:46.897 }, 00:25:46.897 "memory_domains": [ 00:25:46.897 { 00:25:46.897 "dma_device_id": "system", 00:25:46.897 "dma_device_type": 1 00:25:46.897 }, 00:25:46.897 { 00:25:46.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:46.897 "dma_device_type": 2 00:25:46.897 } 00:25:46.897 ], 00:25:46.897 "driver_specific": {} 00:25:46.897 } 00:25:46.897 ] 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:46.897 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:46.898 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:46.898 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.898 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:46.898 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:46.898 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.898 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:46.898 "name": "Existed_Raid", 00:25:46.898 "uuid": "cbdb4d66-906c-4a40-b030-374a917cb04c", 00:25:46.898 "strip_size_kb": 0, 00:25:46.898 "state": "configuring", 00:25:46.898 "raid_level": "raid1", 00:25:46.898 "superblock": true, 00:25:46.898 "num_base_bdevs": 4, 00:25:46.898 "num_base_bdevs_discovered": 3, 00:25:46.898 "num_base_bdevs_operational": 4, 00:25:46.898 "base_bdevs_list": [ 00:25:46.898 { 00:25:46.898 "name": "BaseBdev1", 00:25:46.898 "uuid": "1472d064-a0a0-4afd-adc4-d9572ede3fe6", 00:25:46.898 "is_configured": true, 00:25:46.898 "data_offset": 2048, 00:25:46.898 "data_size": 63488 00:25:46.898 }, 00:25:46.898 { 00:25:46.898 "name": "BaseBdev2", 00:25:46.898 "uuid": 
"a1faa224-d631-43ec-9b81-382eca321f1e", 00:25:46.898 "is_configured": true, 00:25:46.898 "data_offset": 2048, 00:25:46.898 "data_size": 63488 00:25:46.898 }, 00:25:46.898 { 00:25:46.898 "name": "BaseBdev3", 00:25:46.898 "uuid": "eb9e7380-c223-4565-8998-f0ac063c2350", 00:25:46.898 "is_configured": true, 00:25:46.898 "data_offset": 2048, 00:25:46.898 "data_size": 63488 00:25:46.898 }, 00:25:46.898 { 00:25:46.898 "name": "BaseBdev4", 00:25:46.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.898 "is_configured": false, 00:25:46.898 "data_offset": 0, 00:25:46.898 "data_size": 0 00:25:46.898 } 00:25:46.898 ] 00:25:46.898 }' 00:25:46.898 13:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:46.898 13:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:47.464 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:25:47.464 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.464 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:47.464 [2024-11-20 13:45:50.268744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:47.464 [2024-11-20 13:45:50.269328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:47.465 [2024-11-20 13:45:50.269469] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:47.465 BaseBdev4 00:25:47.465 [2024-11-20 13:45:50.269864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:47.465 [2024-11-20 13:45:50.270093] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:47.465 [2024-11-20 13:45:50.270116] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:25:47.465 [2024-11-20 13:45:50.270300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:47.465 [ 00:25:47.465 { 00:25:47.465 "name": "BaseBdev4", 00:25:47.465 "aliases": [ 00:25:47.465 "b1e3683b-4137-4cd0-8a8e-7f5ee385d17a" 00:25:47.465 ], 00:25:47.465 "product_name": "Malloc disk", 00:25:47.465 "block_size": 512, 00:25:47.465 
"num_blocks": 65536, 00:25:47.465 "uuid": "b1e3683b-4137-4cd0-8a8e-7f5ee385d17a", 00:25:47.465 "assigned_rate_limits": { 00:25:47.465 "rw_ios_per_sec": 0, 00:25:47.465 "rw_mbytes_per_sec": 0, 00:25:47.465 "r_mbytes_per_sec": 0, 00:25:47.465 "w_mbytes_per_sec": 0 00:25:47.465 }, 00:25:47.465 "claimed": true, 00:25:47.465 "claim_type": "exclusive_write", 00:25:47.465 "zoned": false, 00:25:47.465 "supported_io_types": { 00:25:47.465 "read": true, 00:25:47.465 "write": true, 00:25:47.465 "unmap": true, 00:25:47.465 "flush": true, 00:25:47.465 "reset": true, 00:25:47.465 "nvme_admin": false, 00:25:47.465 "nvme_io": false, 00:25:47.465 "nvme_io_md": false, 00:25:47.465 "write_zeroes": true, 00:25:47.465 "zcopy": true, 00:25:47.465 "get_zone_info": false, 00:25:47.465 "zone_management": false, 00:25:47.465 "zone_append": false, 00:25:47.465 "compare": false, 00:25:47.465 "compare_and_write": false, 00:25:47.465 "abort": true, 00:25:47.465 "seek_hole": false, 00:25:47.465 "seek_data": false, 00:25:47.465 "copy": true, 00:25:47.465 "nvme_iov_md": false 00:25:47.465 }, 00:25:47.465 "memory_domains": [ 00:25:47.465 { 00:25:47.465 "dma_device_id": "system", 00:25:47.465 "dma_device_type": 1 00:25:47.465 }, 00:25:47.465 { 00:25:47.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:47.465 "dma_device_type": 2 00:25:47.465 } 00:25:47.465 ], 00:25:47.465 "driver_specific": {} 00:25:47.465 } 00:25:47.465 ] 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:47.465 "name": "Existed_Raid", 00:25:47.465 "uuid": "cbdb4d66-906c-4a40-b030-374a917cb04c", 00:25:47.465 "strip_size_kb": 0, 00:25:47.465 "state": "online", 00:25:47.465 "raid_level": "raid1", 00:25:47.465 "superblock": true, 00:25:47.465 "num_base_bdevs": 4, 
00:25:47.465 "num_base_bdevs_discovered": 4, 00:25:47.465 "num_base_bdevs_operational": 4, 00:25:47.465 "base_bdevs_list": [ 00:25:47.465 { 00:25:47.465 "name": "BaseBdev1", 00:25:47.465 "uuid": "1472d064-a0a0-4afd-adc4-d9572ede3fe6", 00:25:47.465 "is_configured": true, 00:25:47.465 "data_offset": 2048, 00:25:47.465 "data_size": 63488 00:25:47.465 }, 00:25:47.465 { 00:25:47.465 "name": "BaseBdev2", 00:25:47.465 "uuid": "a1faa224-d631-43ec-9b81-382eca321f1e", 00:25:47.465 "is_configured": true, 00:25:47.465 "data_offset": 2048, 00:25:47.465 "data_size": 63488 00:25:47.465 }, 00:25:47.465 { 00:25:47.465 "name": "BaseBdev3", 00:25:47.465 "uuid": "eb9e7380-c223-4565-8998-f0ac063c2350", 00:25:47.465 "is_configured": true, 00:25:47.465 "data_offset": 2048, 00:25:47.465 "data_size": 63488 00:25:47.465 }, 00:25:47.465 { 00:25:47.465 "name": "BaseBdev4", 00:25:47.465 "uuid": "b1e3683b-4137-4cd0-8a8e-7f5ee385d17a", 00:25:47.465 "is_configured": true, 00:25:47.465 "data_offset": 2048, 00:25:47.465 "data_size": 63488 00:25:47.465 } 00:25:47.465 ] 00:25:47.465 }' 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:47.465 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.035 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:25:48.035 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:48.035 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:48.035 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:48.035 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:48.035 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:48.035 
13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:48.035 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:48.035 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.035 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.035 [2024-11-20 13:45:50.861462] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:48.035 13:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.035 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:48.035 "name": "Existed_Raid", 00:25:48.035 "aliases": [ 00:25:48.035 "cbdb4d66-906c-4a40-b030-374a917cb04c" 00:25:48.035 ], 00:25:48.035 "product_name": "Raid Volume", 00:25:48.035 "block_size": 512, 00:25:48.035 "num_blocks": 63488, 00:25:48.035 "uuid": "cbdb4d66-906c-4a40-b030-374a917cb04c", 00:25:48.035 "assigned_rate_limits": { 00:25:48.035 "rw_ios_per_sec": 0, 00:25:48.035 "rw_mbytes_per_sec": 0, 00:25:48.035 "r_mbytes_per_sec": 0, 00:25:48.035 "w_mbytes_per_sec": 0 00:25:48.035 }, 00:25:48.035 "claimed": false, 00:25:48.035 "zoned": false, 00:25:48.035 "supported_io_types": { 00:25:48.035 "read": true, 00:25:48.035 "write": true, 00:25:48.035 "unmap": false, 00:25:48.035 "flush": false, 00:25:48.035 "reset": true, 00:25:48.035 "nvme_admin": false, 00:25:48.035 "nvme_io": false, 00:25:48.035 "nvme_io_md": false, 00:25:48.035 "write_zeroes": true, 00:25:48.035 "zcopy": false, 00:25:48.035 "get_zone_info": false, 00:25:48.035 "zone_management": false, 00:25:48.035 "zone_append": false, 00:25:48.035 "compare": false, 00:25:48.035 "compare_and_write": false, 00:25:48.035 "abort": false, 00:25:48.035 "seek_hole": false, 00:25:48.035 "seek_data": false, 00:25:48.035 "copy": false, 00:25:48.035 
"nvme_iov_md": false 00:25:48.035 }, 00:25:48.035 "memory_domains": [ 00:25:48.035 { 00:25:48.035 "dma_device_id": "system", 00:25:48.035 "dma_device_type": 1 00:25:48.035 }, 00:25:48.035 { 00:25:48.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.035 "dma_device_type": 2 00:25:48.035 }, 00:25:48.035 { 00:25:48.035 "dma_device_id": "system", 00:25:48.035 "dma_device_type": 1 00:25:48.035 }, 00:25:48.035 { 00:25:48.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.035 "dma_device_type": 2 00:25:48.035 }, 00:25:48.035 { 00:25:48.035 "dma_device_id": "system", 00:25:48.035 "dma_device_type": 1 00:25:48.035 }, 00:25:48.035 { 00:25:48.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.035 "dma_device_type": 2 00:25:48.035 }, 00:25:48.035 { 00:25:48.035 "dma_device_id": "system", 00:25:48.035 "dma_device_type": 1 00:25:48.035 }, 00:25:48.035 { 00:25:48.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.035 "dma_device_type": 2 00:25:48.035 } 00:25:48.035 ], 00:25:48.035 "driver_specific": { 00:25:48.035 "raid": { 00:25:48.035 "uuid": "cbdb4d66-906c-4a40-b030-374a917cb04c", 00:25:48.035 "strip_size_kb": 0, 00:25:48.035 "state": "online", 00:25:48.035 "raid_level": "raid1", 00:25:48.035 "superblock": true, 00:25:48.035 "num_base_bdevs": 4, 00:25:48.035 "num_base_bdevs_discovered": 4, 00:25:48.035 "num_base_bdevs_operational": 4, 00:25:48.035 "base_bdevs_list": [ 00:25:48.035 { 00:25:48.035 "name": "BaseBdev1", 00:25:48.035 "uuid": "1472d064-a0a0-4afd-adc4-d9572ede3fe6", 00:25:48.035 "is_configured": true, 00:25:48.035 "data_offset": 2048, 00:25:48.035 "data_size": 63488 00:25:48.035 }, 00:25:48.035 { 00:25:48.035 "name": "BaseBdev2", 00:25:48.035 "uuid": "a1faa224-d631-43ec-9b81-382eca321f1e", 00:25:48.035 "is_configured": true, 00:25:48.035 "data_offset": 2048, 00:25:48.035 "data_size": 63488 00:25:48.035 }, 00:25:48.035 { 00:25:48.035 "name": "BaseBdev3", 00:25:48.035 "uuid": "eb9e7380-c223-4565-8998-f0ac063c2350", 00:25:48.035 "is_configured": true, 
00:25:48.035 "data_offset": 2048, 00:25:48.035 "data_size": 63488 00:25:48.035 }, 00:25:48.035 { 00:25:48.035 "name": "BaseBdev4", 00:25:48.035 "uuid": "b1e3683b-4137-4cd0-8a8e-7f5ee385d17a", 00:25:48.035 "is_configured": true, 00:25:48.035 "data_offset": 2048, 00:25:48.035 "data_size": 63488 00:25:48.035 } 00:25:48.035 ] 00:25:48.035 } 00:25:48.035 } 00:25:48.035 }' 00:25:48.035 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:48.294 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:25:48.294 BaseBdev2 00:25:48.294 BaseBdev3 00:25:48.294 BaseBdev4' 00:25:48.294 13:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:48.294 13:45:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.294 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.553 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:48.553 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:48.553 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:48.553 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.554 [2024-11-20 13:45:51.241243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:25:48.554 13:45:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:48.554 "name": "Existed_Raid", 00:25:48.554 "uuid": "cbdb4d66-906c-4a40-b030-374a917cb04c", 00:25:48.554 "strip_size_kb": 0, 00:25:48.554 
"state": "online", 00:25:48.554 "raid_level": "raid1", 00:25:48.554 "superblock": true, 00:25:48.554 "num_base_bdevs": 4, 00:25:48.554 "num_base_bdevs_discovered": 3, 00:25:48.554 "num_base_bdevs_operational": 3, 00:25:48.554 "base_bdevs_list": [ 00:25:48.554 { 00:25:48.554 "name": null, 00:25:48.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.554 "is_configured": false, 00:25:48.554 "data_offset": 0, 00:25:48.554 "data_size": 63488 00:25:48.554 }, 00:25:48.554 { 00:25:48.554 "name": "BaseBdev2", 00:25:48.554 "uuid": "a1faa224-d631-43ec-9b81-382eca321f1e", 00:25:48.554 "is_configured": true, 00:25:48.554 "data_offset": 2048, 00:25:48.554 "data_size": 63488 00:25:48.554 }, 00:25:48.554 { 00:25:48.554 "name": "BaseBdev3", 00:25:48.554 "uuid": "eb9e7380-c223-4565-8998-f0ac063c2350", 00:25:48.554 "is_configured": true, 00:25:48.554 "data_offset": 2048, 00:25:48.554 "data_size": 63488 00:25:48.554 }, 00:25:48.554 { 00:25:48.554 "name": "BaseBdev4", 00:25:48.554 "uuid": "b1e3683b-4137-4cd0-8a8e-7f5ee385d17a", 00:25:48.554 "is_configured": true, 00:25:48.554 "data_offset": 2048, 00:25:48.554 "data_size": 63488 00:25:48.554 } 00:25:48.554 ] 00:25:48.554 }' 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:48.554 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.121 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:25:49.121 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:49.121 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.121 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:49.121 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.121 13:45:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.121 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.121 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:49.121 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:49.121 13:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:25:49.121 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.121 13:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.121 [2024-11-20 13:45:51.939543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:49.121 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.121 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:49.121 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:49.121 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.121 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.121 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:49.121 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.380 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.380 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:49.380 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:25:49.380 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:25:49.380 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.380 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.380 [2024-11-20 13:45:52.088746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:49.380 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.380 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:49.380 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:49.380 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.380 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.380 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.380 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:25:49.380 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.380 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:25:49.380 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:49.380 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:25:49.380 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.380 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.380 [2024-11-20 13:45:52.240728] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:49.380 [2024-11-20 13:45:52.240915] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:49.641 [2024-11-20 13:45:52.330627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:49.641 [2024-11-20 13:45:52.330708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:49.641 [2024-11-20 13:45:52.330728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.641 BaseBdev2 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:25:49.641 [ 00:25:49.641 { 00:25:49.641 "name": "BaseBdev2", 00:25:49.641 "aliases": [ 00:25:49.641 "8378e3b9-a239-44a5-b134-aa77c4b7cf4b" 00:25:49.641 ], 00:25:49.641 "product_name": "Malloc disk", 00:25:49.641 "block_size": 512, 00:25:49.641 "num_blocks": 65536, 00:25:49.641 "uuid": "8378e3b9-a239-44a5-b134-aa77c4b7cf4b", 00:25:49.641 "assigned_rate_limits": { 00:25:49.641 "rw_ios_per_sec": 0, 00:25:49.641 "rw_mbytes_per_sec": 0, 00:25:49.641 "r_mbytes_per_sec": 0, 00:25:49.641 "w_mbytes_per_sec": 0 00:25:49.641 }, 00:25:49.641 "claimed": false, 00:25:49.641 "zoned": false, 00:25:49.641 "supported_io_types": { 00:25:49.641 "read": true, 00:25:49.641 "write": true, 00:25:49.641 "unmap": true, 00:25:49.641 "flush": true, 00:25:49.641 "reset": true, 00:25:49.641 "nvme_admin": false, 00:25:49.641 "nvme_io": false, 00:25:49.641 "nvme_io_md": false, 00:25:49.641 "write_zeroes": true, 00:25:49.641 "zcopy": true, 00:25:49.641 "get_zone_info": false, 00:25:49.641 "zone_management": false, 00:25:49.641 "zone_append": false, 00:25:49.641 "compare": false, 00:25:49.641 "compare_and_write": false, 00:25:49.641 "abort": true, 00:25:49.641 "seek_hole": false, 00:25:49.641 "seek_data": false, 00:25:49.641 "copy": true, 00:25:49.641 "nvme_iov_md": false 00:25:49.641 }, 00:25:49.641 "memory_domains": [ 00:25:49.641 { 00:25:49.641 "dma_device_id": "system", 00:25:49.641 "dma_device_type": 1 00:25:49.641 }, 00:25:49.641 { 00:25:49.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.641 "dma_device_type": 2 00:25:49.641 } 00:25:49.641 ], 00:25:49.641 "driver_specific": {} 00:25:49.641 } 00:25:49.641 ] 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:49.641 13:45:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.641 BaseBdev3 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:49.641 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.641 13:45:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.641 [ 00:25:49.641 { 00:25:49.641 "name": "BaseBdev3", 00:25:49.641 "aliases": [ 00:25:49.641 "8aae60de-3a20-4c99-8dfd-2a371b8075d0" 00:25:49.641 ], 00:25:49.641 "product_name": "Malloc disk", 00:25:49.641 "block_size": 512, 00:25:49.641 "num_blocks": 65536, 00:25:49.641 "uuid": "8aae60de-3a20-4c99-8dfd-2a371b8075d0", 00:25:49.641 "assigned_rate_limits": { 00:25:49.641 "rw_ios_per_sec": 0, 00:25:49.641 "rw_mbytes_per_sec": 0, 00:25:49.641 "r_mbytes_per_sec": 0, 00:25:49.641 "w_mbytes_per_sec": 0 00:25:49.641 }, 00:25:49.641 "claimed": false, 00:25:49.641 "zoned": false, 00:25:49.641 "supported_io_types": { 00:25:49.641 "read": true, 00:25:49.641 "write": true, 00:25:49.641 "unmap": true, 00:25:49.641 "flush": true, 00:25:49.641 "reset": true, 00:25:49.641 "nvme_admin": false, 00:25:49.641 "nvme_io": false, 00:25:49.641 "nvme_io_md": false, 00:25:49.641 "write_zeroes": true, 00:25:49.641 "zcopy": true, 00:25:49.641 "get_zone_info": false, 00:25:49.641 "zone_management": false, 00:25:49.641 "zone_append": false, 00:25:49.641 "compare": false, 00:25:49.641 "compare_and_write": false, 00:25:49.641 "abort": true, 00:25:49.641 "seek_hole": false, 00:25:49.641 "seek_data": false, 00:25:49.641 "copy": true, 00:25:49.641 "nvme_iov_md": false 00:25:49.641 }, 00:25:49.641 "memory_domains": [ 00:25:49.641 { 00:25:49.641 "dma_device_id": "system", 00:25:49.642 "dma_device_type": 1 00:25:49.642 }, 00:25:49.642 { 00:25:49.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.642 "dma_device_type": 2 00:25:49.642 } 00:25:49.642 ], 00:25:49.642 "driver_specific": {} 00:25:49.642 } 00:25:49.642 ] 00:25:49.642 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.642 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:49.642 13:45:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:49.642 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:49.642 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:25:49.642 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.642 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.941 BaseBdev4 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.942 [ 00:25:49.942 { 00:25:49.942 "name": "BaseBdev4", 00:25:49.942 "aliases": [ 00:25:49.942 "46efeba7-a866-42f1-8413-382f011a8443" 00:25:49.942 ], 00:25:49.942 "product_name": "Malloc disk", 00:25:49.942 "block_size": 512, 00:25:49.942 "num_blocks": 65536, 00:25:49.942 "uuid": "46efeba7-a866-42f1-8413-382f011a8443", 00:25:49.942 "assigned_rate_limits": { 00:25:49.942 "rw_ios_per_sec": 0, 00:25:49.942 "rw_mbytes_per_sec": 0, 00:25:49.942 "r_mbytes_per_sec": 0, 00:25:49.942 "w_mbytes_per_sec": 0 00:25:49.942 }, 00:25:49.942 "claimed": false, 00:25:49.942 "zoned": false, 00:25:49.942 "supported_io_types": { 00:25:49.942 "read": true, 00:25:49.942 "write": true, 00:25:49.942 "unmap": true, 00:25:49.942 "flush": true, 00:25:49.942 "reset": true, 00:25:49.942 "nvme_admin": false, 00:25:49.942 "nvme_io": false, 00:25:49.942 "nvme_io_md": false, 00:25:49.942 "write_zeroes": true, 00:25:49.942 "zcopy": true, 00:25:49.942 "get_zone_info": false, 00:25:49.942 "zone_management": false, 00:25:49.942 "zone_append": false, 00:25:49.942 "compare": false, 00:25:49.942 "compare_and_write": false, 00:25:49.942 "abort": true, 00:25:49.942 "seek_hole": false, 00:25:49.942 "seek_data": false, 00:25:49.942 "copy": true, 00:25:49.942 "nvme_iov_md": false 00:25:49.942 }, 00:25:49.942 "memory_domains": [ 00:25:49.942 { 00:25:49.942 "dma_device_id": "system", 00:25:49.942 "dma_device_type": 1 00:25:49.942 }, 00:25:49.942 { 00:25:49.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.942 "dma_device_type": 2 00:25:49.942 } 00:25:49.942 ], 00:25:49.942 "driver_specific": {} 00:25:49.942 } 00:25:49.942 ] 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
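Each recreated base bdev above goes through `waitforbdev`, which the log shows issuing `bdev_get_bdevs -b <name> -t 2000`; the `-t 2000` flag asks the SPDK target itself to wait up to 2000 ms for the bdev to appear. Purely as a sketch of that wait-until-present behavior (not the actual helper, which delegates the timeout to the RPC), a client-side equivalent might look like this; `query_bdev` is a made-up stand-in for the RPC call:

```python
import time

def wait_for_bdev(query_bdev, name, timeout_ms=2000, interval_ms=100):
    # Poll until query_bdev returns a bdev record or the timeout elapses.
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if query_bdev(name) is not None:
            return True
        time.sleep(interval_ms / 1000.0)
    return False

# Simulated target for demonstration: the bdev "appears" on the second poll.
calls = {"n": 0}
def fake_query(name):
    calls["n"] += 1
    return {"name": name} if calls["n"] >= 2 else None

print(wait_for_bdev(fake_query, "BaseBdev4"))  # -> True
```

The default of 2000 ms matches the `bdev_timeout=2000` assignment visible in the `waitforbdev` trace lines above.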
00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.942 [2024-11-20 13:45:52.611229] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:49.942 [2024-11-20 13:45:52.611313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:49.942 [2024-11-20 13:45:52.611342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:49.942 [2024-11-20 13:45:52.613834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:49.942 [2024-11-20 13:45:52.613963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:49.942 "name": "Existed_Raid", 00:25:49.942 "uuid": "03df04a6-93b1-4a5d-a146-d3260f50ab9f", 00:25:49.942 "strip_size_kb": 0, 00:25:49.942 "state": "configuring", 00:25:49.942 "raid_level": "raid1", 00:25:49.942 "superblock": true, 00:25:49.942 "num_base_bdevs": 4, 00:25:49.942 "num_base_bdevs_discovered": 3, 00:25:49.942 "num_base_bdevs_operational": 4, 00:25:49.942 "base_bdevs_list": [ 00:25:49.942 { 00:25:49.942 "name": "BaseBdev1", 00:25:49.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.942 "is_configured": false, 00:25:49.942 "data_offset": 0, 00:25:49.942 "data_size": 0 00:25:49.942 }, 00:25:49.942 { 00:25:49.942 "name": "BaseBdev2", 00:25:49.942 "uuid": "8378e3b9-a239-44a5-b134-aa77c4b7cf4b", 
00:25:49.942 "is_configured": true, 00:25:49.942 "data_offset": 2048, 00:25:49.942 "data_size": 63488 00:25:49.942 }, 00:25:49.942 { 00:25:49.942 "name": "BaseBdev3", 00:25:49.942 "uuid": "8aae60de-3a20-4c99-8dfd-2a371b8075d0", 00:25:49.942 "is_configured": true, 00:25:49.942 "data_offset": 2048, 00:25:49.942 "data_size": 63488 00:25:49.942 }, 00:25:49.942 { 00:25:49.942 "name": "BaseBdev4", 00:25:49.942 "uuid": "46efeba7-a866-42f1-8413-382f011a8443", 00:25:49.942 "is_configured": true, 00:25:49.942 "data_offset": 2048, 00:25:49.942 "data_size": 63488 00:25:49.942 } 00:25:49.942 ] 00:25:49.942 }' 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:49.942 13:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.534 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:25:50.534 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.534 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.535 [2024-11-20 13:45:53.155480] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:50.535 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.535 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:50.535 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:50.535 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:50.535 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:50.535 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:25:50.535 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:50.535 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:50.535 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:50.535 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:50.535 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:50.535 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:50.535 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.535 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:50.535 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.535 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.535 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:50.535 "name": "Existed_Raid", 00:25:50.535 "uuid": "03df04a6-93b1-4a5d-a146-d3260f50ab9f", 00:25:50.535 "strip_size_kb": 0, 00:25:50.535 "state": "configuring", 00:25:50.535 "raid_level": "raid1", 00:25:50.535 "superblock": true, 00:25:50.535 "num_base_bdevs": 4, 00:25:50.535 "num_base_bdevs_discovered": 2, 00:25:50.535 "num_base_bdevs_operational": 4, 00:25:50.535 "base_bdevs_list": [ 00:25:50.535 { 00:25:50.535 "name": "BaseBdev1", 00:25:50.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.535 "is_configured": false, 00:25:50.535 "data_offset": 0, 00:25:50.535 "data_size": 0 00:25:50.535 }, 00:25:50.535 { 00:25:50.535 "name": null, 00:25:50.535 "uuid": "8378e3b9-a239-44a5-b134-aa77c4b7cf4b", 00:25:50.535 
"is_configured": false, 00:25:50.535 "data_offset": 0, 00:25:50.535 "data_size": 63488 00:25:50.535 }, 00:25:50.535 { 00:25:50.535 "name": "BaseBdev3", 00:25:50.535 "uuid": "8aae60de-3a20-4c99-8dfd-2a371b8075d0", 00:25:50.535 "is_configured": true, 00:25:50.535 "data_offset": 2048, 00:25:50.535 "data_size": 63488 00:25:50.535 }, 00:25:50.535 { 00:25:50.535 "name": "BaseBdev4", 00:25:50.535 "uuid": "46efeba7-a866-42f1-8413-382f011a8443", 00:25:50.535 "is_configured": true, 00:25:50.535 "data_offset": 2048, 00:25:50.535 "data_size": 63488 00:25:50.535 } 00:25:50.535 ] 00:25:50.535 }' 00:25:50.535 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:50.535 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.794 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:50.794 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:50.794 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.794 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.053 [2024-11-20 13:45:53.787445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:51.053 BaseBdev1 
00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.053 [ 00:25:51.053 { 00:25:51.053 "name": "BaseBdev1", 00:25:51.053 "aliases": [ 00:25:51.053 "21a47209-ea5a-4160-9371-b25ad35a1abf" 00:25:51.053 ], 00:25:51.053 "product_name": "Malloc disk", 00:25:51.053 "block_size": 512, 00:25:51.053 "num_blocks": 65536, 00:25:51.053 "uuid": "21a47209-ea5a-4160-9371-b25ad35a1abf", 00:25:51.053 "assigned_rate_limits": { 00:25:51.053 
"rw_ios_per_sec": 0, 00:25:51.053 "rw_mbytes_per_sec": 0, 00:25:51.053 "r_mbytes_per_sec": 0, 00:25:51.053 "w_mbytes_per_sec": 0 00:25:51.053 }, 00:25:51.053 "claimed": true, 00:25:51.053 "claim_type": "exclusive_write", 00:25:51.053 "zoned": false, 00:25:51.053 "supported_io_types": { 00:25:51.053 "read": true, 00:25:51.053 "write": true, 00:25:51.053 "unmap": true, 00:25:51.053 "flush": true, 00:25:51.053 "reset": true, 00:25:51.053 "nvme_admin": false, 00:25:51.053 "nvme_io": false, 00:25:51.053 "nvme_io_md": false, 00:25:51.053 "write_zeroes": true, 00:25:51.053 "zcopy": true, 00:25:51.053 "get_zone_info": false, 00:25:51.053 "zone_management": false, 00:25:51.053 "zone_append": false, 00:25:51.053 "compare": false, 00:25:51.053 "compare_and_write": false, 00:25:51.053 "abort": true, 00:25:51.053 "seek_hole": false, 00:25:51.053 "seek_data": false, 00:25:51.053 "copy": true, 00:25:51.053 "nvme_iov_md": false 00:25:51.053 }, 00:25:51.053 "memory_domains": [ 00:25:51.053 { 00:25:51.053 "dma_device_id": "system", 00:25:51.053 "dma_device_type": 1 00:25:51.053 }, 00:25:51.053 { 00:25:51.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:51.053 "dma_device_type": 2 00:25:51.053 } 00:25:51.053 ], 00:25:51.053 "driver_specific": {} 00:25:51.053 } 00:25:51.053 ] 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.053 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:51.053 "name": "Existed_Raid", 00:25:51.053 "uuid": "03df04a6-93b1-4a5d-a146-d3260f50ab9f", 00:25:51.053 "strip_size_kb": 0, 00:25:51.053 "state": "configuring", 00:25:51.053 "raid_level": "raid1", 00:25:51.053 "superblock": true, 00:25:51.053 "num_base_bdevs": 4, 00:25:51.053 "num_base_bdevs_discovered": 3, 00:25:51.053 "num_base_bdevs_operational": 4, 00:25:51.053 "base_bdevs_list": [ 00:25:51.053 { 00:25:51.053 "name": "BaseBdev1", 00:25:51.053 "uuid": "21a47209-ea5a-4160-9371-b25ad35a1abf", 00:25:51.053 "is_configured": true, 00:25:51.053 "data_offset": 2048, 00:25:51.053 "data_size": 63488 
00:25:51.053 }, 00:25:51.053 { 00:25:51.053 "name": null, 00:25:51.053 "uuid": "8378e3b9-a239-44a5-b134-aa77c4b7cf4b", 00:25:51.053 "is_configured": false, 00:25:51.053 "data_offset": 0, 00:25:51.053 "data_size": 63488 00:25:51.053 }, 00:25:51.053 { 00:25:51.053 "name": "BaseBdev3", 00:25:51.053 "uuid": "8aae60de-3a20-4c99-8dfd-2a371b8075d0", 00:25:51.053 "is_configured": true, 00:25:51.053 "data_offset": 2048, 00:25:51.053 "data_size": 63488 00:25:51.053 }, 00:25:51.053 { 00:25:51.053 "name": "BaseBdev4", 00:25:51.053 "uuid": "46efeba7-a866-42f1-8413-382f011a8443", 00:25:51.053 "is_configured": true, 00:25:51.053 "data_offset": 2048, 00:25:51.054 "data_size": 63488 00:25:51.054 } 00:25:51.054 ] 00:25:51.054 }' 00:25:51.054 13:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:51.054 13:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.622 
[2024-11-20 13:45:54.415720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:51.622 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.622 13:45:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:51.622 "name": "Existed_Raid", 00:25:51.622 "uuid": "03df04a6-93b1-4a5d-a146-d3260f50ab9f", 00:25:51.622 "strip_size_kb": 0, 00:25:51.622 "state": "configuring", 00:25:51.622 "raid_level": "raid1", 00:25:51.622 "superblock": true, 00:25:51.622 "num_base_bdevs": 4, 00:25:51.622 "num_base_bdevs_discovered": 2, 00:25:51.622 "num_base_bdevs_operational": 4, 00:25:51.622 "base_bdevs_list": [ 00:25:51.622 { 00:25:51.622 "name": "BaseBdev1", 00:25:51.622 "uuid": "21a47209-ea5a-4160-9371-b25ad35a1abf", 00:25:51.622 "is_configured": true, 00:25:51.622 "data_offset": 2048, 00:25:51.622 "data_size": 63488 00:25:51.622 }, 00:25:51.622 { 00:25:51.622 "name": null, 00:25:51.622 "uuid": "8378e3b9-a239-44a5-b134-aa77c4b7cf4b", 00:25:51.622 "is_configured": false, 00:25:51.622 "data_offset": 0, 00:25:51.622 "data_size": 63488 00:25:51.622 }, 00:25:51.622 { 00:25:51.622 "name": null, 00:25:51.622 "uuid": "8aae60de-3a20-4c99-8dfd-2a371b8075d0", 00:25:51.622 "is_configured": false, 00:25:51.622 "data_offset": 0, 00:25:51.622 "data_size": 63488 00:25:51.622 }, 00:25:51.622 { 00:25:51.622 "name": "BaseBdev4", 00:25:51.623 "uuid": "46efeba7-a866-42f1-8413-382f011a8443", 00:25:51.623 "is_configured": true, 00:25:51.623 "data_offset": 2048, 00:25:51.623 "data_size": 63488 00:25:51.623 } 00:25:51.623 ] 00:25:51.623 }' 00:25:51.623 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:51.623 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.191 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:52.191 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:52.191 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.191 
13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.191 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.191 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:25:52.191 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:52.191 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.191 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.191 [2024-11-20 13:45:54.991902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:52.191 13:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.191 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:52.191 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:52.191 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:52.191 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:52.191 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:52.191 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:52.191 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:52.191 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:52.191 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:25:52.191 13:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:52.191 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:52.191 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.191 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:52.191 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.191 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.191 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:52.191 "name": "Existed_Raid", 00:25:52.191 "uuid": "03df04a6-93b1-4a5d-a146-d3260f50ab9f", 00:25:52.191 "strip_size_kb": 0, 00:25:52.191 "state": "configuring", 00:25:52.191 "raid_level": "raid1", 00:25:52.191 "superblock": true, 00:25:52.191 "num_base_bdevs": 4, 00:25:52.191 "num_base_bdevs_discovered": 3, 00:25:52.191 "num_base_bdevs_operational": 4, 00:25:52.191 "base_bdevs_list": [ 00:25:52.191 { 00:25:52.192 "name": "BaseBdev1", 00:25:52.192 "uuid": "21a47209-ea5a-4160-9371-b25ad35a1abf", 00:25:52.192 "is_configured": true, 00:25:52.192 "data_offset": 2048, 00:25:52.192 "data_size": 63488 00:25:52.192 }, 00:25:52.192 { 00:25:52.192 "name": null, 00:25:52.192 "uuid": "8378e3b9-a239-44a5-b134-aa77c4b7cf4b", 00:25:52.192 "is_configured": false, 00:25:52.192 "data_offset": 0, 00:25:52.192 "data_size": 63488 00:25:52.192 }, 00:25:52.192 { 00:25:52.192 "name": "BaseBdev3", 00:25:52.192 "uuid": "8aae60de-3a20-4c99-8dfd-2a371b8075d0", 00:25:52.192 "is_configured": true, 00:25:52.192 "data_offset": 2048, 00:25:52.192 "data_size": 63488 00:25:52.192 }, 00:25:52.192 { 00:25:52.192 "name": "BaseBdev4", 00:25:52.192 "uuid": 
"46efeba7-a866-42f1-8413-382f011a8443", 00:25:52.192 "is_configured": true, 00:25:52.192 "data_offset": 2048, 00:25:52.192 "data_size": 63488 00:25:52.192 } 00:25:52.192 ] 00:25:52.192 }' 00:25:52.192 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:52.192 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:52.760 [2024-11-20 13:45:55.580204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:52.760 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:53.019 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.019 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:53.019 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.019 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.019 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.019 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:53.019 "name": "Existed_Raid", 00:25:53.019 "uuid": "03df04a6-93b1-4a5d-a146-d3260f50ab9f", 00:25:53.019 "strip_size_kb": 0, 00:25:53.019 "state": "configuring", 00:25:53.019 "raid_level": "raid1", 00:25:53.019 "superblock": true, 00:25:53.019 "num_base_bdevs": 4, 00:25:53.019 "num_base_bdevs_discovered": 2, 00:25:53.019 "num_base_bdevs_operational": 4, 00:25:53.019 "base_bdevs_list": [ 00:25:53.019 { 00:25:53.019 "name": null, 00:25:53.019 
"uuid": "21a47209-ea5a-4160-9371-b25ad35a1abf", 00:25:53.019 "is_configured": false, 00:25:53.019 "data_offset": 0, 00:25:53.019 "data_size": 63488 00:25:53.019 }, 00:25:53.019 { 00:25:53.019 "name": null, 00:25:53.019 "uuid": "8378e3b9-a239-44a5-b134-aa77c4b7cf4b", 00:25:53.019 "is_configured": false, 00:25:53.019 "data_offset": 0, 00:25:53.019 "data_size": 63488 00:25:53.019 }, 00:25:53.019 { 00:25:53.019 "name": "BaseBdev3", 00:25:53.019 "uuid": "8aae60de-3a20-4c99-8dfd-2a371b8075d0", 00:25:53.019 "is_configured": true, 00:25:53.019 "data_offset": 2048, 00:25:53.019 "data_size": 63488 00:25:53.019 }, 00:25:53.019 { 00:25:53.019 "name": "BaseBdev4", 00:25:53.019 "uuid": "46efeba7-a866-42f1-8413-382f011a8443", 00:25:53.019 "is_configured": true, 00:25:53.019 "data_offset": 2048, 00:25:53.019 "data_size": 63488 00:25:53.019 } 00:25:53.019 ] 00:25:53.019 }' 00:25:53.019 13:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:53.019 13:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.587 [2024-11-20 13:45:56.264103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:53.587 13:45:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:53.587 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.588 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:53.588 "name": "Existed_Raid", 00:25:53.588 "uuid": "03df04a6-93b1-4a5d-a146-d3260f50ab9f", 00:25:53.588 "strip_size_kb": 0, 00:25:53.588 "state": "configuring", 00:25:53.588 "raid_level": "raid1", 00:25:53.588 "superblock": true, 00:25:53.588 "num_base_bdevs": 4, 00:25:53.588 "num_base_bdevs_discovered": 3, 00:25:53.588 "num_base_bdevs_operational": 4, 00:25:53.588 "base_bdevs_list": [ 00:25:53.588 { 00:25:53.588 "name": null, 00:25:53.588 "uuid": "21a47209-ea5a-4160-9371-b25ad35a1abf", 00:25:53.588 "is_configured": false, 00:25:53.588 "data_offset": 0, 00:25:53.588 "data_size": 63488 00:25:53.588 }, 00:25:53.588 { 00:25:53.588 "name": "BaseBdev2", 00:25:53.588 "uuid": "8378e3b9-a239-44a5-b134-aa77c4b7cf4b", 00:25:53.588 "is_configured": true, 00:25:53.588 "data_offset": 2048, 00:25:53.588 "data_size": 63488 00:25:53.588 }, 00:25:53.588 { 00:25:53.588 "name": "BaseBdev3", 00:25:53.588 "uuid": "8aae60de-3a20-4c99-8dfd-2a371b8075d0", 00:25:53.588 "is_configured": true, 00:25:53.588 "data_offset": 2048, 00:25:53.588 "data_size": 63488 00:25:53.588 }, 00:25:53.588 { 00:25:53.588 "name": "BaseBdev4", 00:25:53.588 "uuid": "46efeba7-a866-42f1-8413-382f011a8443", 00:25:53.588 "is_configured": true, 00:25:53.588 "data_offset": 2048, 00:25:53.588 "data_size": 63488 00:25:53.588 } 00:25:53.588 ] 00:25:53.588 }' 00:25:53.588 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:53.588 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.157 13:45:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 21a47209-ea5a-4160-9371-b25ad35a1abf 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.157 [2024-11-20 13:45:56.960461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:54.157 NewBaseBdev 00:25:54.157 [2024-11-20 13:45:56.961035] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:54.157 [2024-11-20 13:45:56.961067] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:54.157 [2024-11-20 13:45:56.961411] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:25:54.157 [2024-11-20 13:45:56.961616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:54.157 [2024-11-20 13:45:56.961633] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:25:54.157 [2024-11-20 13:45:56.961799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.157 
13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.157 [ 00:25:54.157 { 00:25:54.157 "name": "NewBaseBdev", 00:25:54.157 "aliases": [ 00:25:54.157 "21a47209-ea5a-4160-9371-b25ad35a1abf" 00:25:54.157 ], 00:25:54.157 "product_name": "Malloc disk", 00:25:54.157 "block_size": 512, 00:25:54.157 "num_blocks": 65536, 00:25:54.157 "uuid": "21a47209-ea5a-4160-9371-b25ad35a1abf", 00:25:54.157 "assigned_rate_limits": { 00:25:54.157 "rw_ios_per_sec": 0, 00:25:54.157 "rw_mbytes_per_sec": 0, 00:25:54.157 "r_mbytes_per_sec": 0, 00:25:54.157 "w_mbytes_per_sec": 0 00:25:54.157 }, 00:25:54.157 "claimed": true, 00:25:54.157 "claim_type": "exclusive_write", 00:25:54.157 "zoned": false, 00:25:54.157 "supported_io_types": { 00:25:54.157 "read": true, 00:25:54.157 "write": true, 00:25:54.157 "unmap": true, 00:25:54.157 "flush": true, 00:25:54.157 "reset": true, 00:25:54.157 "nvme_admin": false, 00:25:54.157 "nvme_io": false, 00:25:54.157 "nvme_io_md": false, 00:25:54.157 "write_zeroes": true, 00:25:54.157 "zcopy": true, 00:25:54.157 "get_zone_info": false, 00:25:54.157 "zone_management": false, 00:25:54.157 "zone_append": false, 00:25:54.157 "compare": false, 00:25:54.157 "compare_and_write": false, 00:25:54.157 "abort": true, 00:25:54.157 "seek_hole": false, 00:25:54.157 "seek_data": false, 00:25:54.157 "copy": true, 00:25:54.157 "nvme_iov_md": false 00:25:54.157 }, 00:25:54.157 "memory_domains": [ 00:25:54.157 { 00:25:54.157 "dma_device_id": "system", 00:25:54.157 "dma_device_type": 1 00:25:54.157 }, 00:25:54.157 { 00:25:54.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.157 "dma_device_type": 2 00:25:54.157 } 00:25:54.157 ], 00:25:54.157 "driver_specific": {} 00:25:54.157 } 00:25:54.157 ] 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:25:54.157 13:45:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:54.157 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:54.158 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:54.158 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:54.158 13:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:54.158 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.158 13:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.158 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.158 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:54.158 "name": "Existed_Raid", 00:25:54.158 "uuid": "03df04a6-93b1-4a5d-a146-d3260f50ab9f", 00:25:54.158 "strip_size_kb": 0, 00:25:54.158 
"state": "online", 00:25:54.158 "raid_level": "raid1", 00:25:54.158 "superblock": true, 00:25:54.158 "num_base_bdevs": 4, 00:25:54.158 "num_base_bdevs_discovered": 4, 00:25:54.158 "num_base_bdevs_operational": 4, 00:25:54.158 "base_bdevs_list": [ 00:25:54.158 { 00:25:54.158 "name": "NewBaseBdev", 00:25:54.158 "uuid": "21a47209-ea5a-4160-9371-b25ad35a1abf", 00:25:54.158 "is_configured": true, 00:25:54.158 "data_offset": 2048, 00:25:54.158 "data_size": 63488 00:25:54.158 }, 00:25:54.158 { 00:25:54.158 "name": "BaseBdev2", 00:25:54.158 "uuid": "8378e3b9-a239-44a5-b134-aa77c4b7cf4b", 00:25:54.158 "is_configured": true, 00:25:54.158 "data_offset": 2048, 00:25:54.158 "data_size": 63488 00:25:54.158 }, 00:25:54.158 { 00:25:54.158 "name": "BaseBdev3", 00:25:54.158 "uuid": "8aae60de-3a20-4c99-8dfd-2a371b8075d0", 00:25:54.158 "is_configured": true, 00:25:54.158 "data_offset": 2048, 00:25:54.158 "data_size": 63488 00:25:54.158 }, 00:25:54.158 { 00:25:54.158 "name": "BaseBdev4", 00:25:54.158 "uuid": "46efeba7-a866-42f1-8413-382f011a8443", 00:25:54.158 "is_configured": true, 00:25:54.158 "data_offset": 2048, 00:25:54.158 "data_size": 63488 00:25:54.158 } 00:25:54.158 ] 00:25:54.158 }' 00:25:54.158 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:54.158 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.727 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:25:54.727 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:25:54.727 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:54.727 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:54.727 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:25:54.727 
13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:54.727 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:25:54.727 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.727 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.727 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:54.727 [2024-11-20 13:45:57.533224] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:54.727 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.727 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:54.727 "name": "Existed_Raid", 00:25:54.727 "aliases": [ 00:25:54.727 "03df04a6-93b1-4a5d-a146-d3260f50ab9f" 00:25:54.727 ], 00:25:54.727 "product_name": "Raid Volume", 00:25:54.727 "block_size": 512, 00:25:54.727 "num_blocks": 63488, 00:25:54.727 "uuid": "03df04a6-93b1-4a5d-a146-d3260f50ab9f", 00:25:54.727 "assigned_rate_limits": { 00:25:54.727 "rw_ios_per_sec": 0, 00:25:54.727 "rw_mbytes_per_sec": 0, 00:25:54.727 "r_mbytes_per_sec": 0, 00:25:54.727 "w_mbytes_per_sec": 0 00:25:54.727 }, 00:25:54.727 "claimed": false, 00:25:54.727 "zoned": false, 00:25:54.727 "supported_io_types": { 00:25:54.727 "read": true, 00:25:54.727 "write": true, 00:25:54.727 "unmap": false, 00:25:54.727 "flush": false, 00:25:54.727 "reset": true, 00:25:54.727 "nvme_admin": false, 00:25:54.727 "nvme_io": false, 00:25:54.727 "nvme_io_md": false, 00:25:54.727 "write_zeroes": true, 00:25:54.727 "zcopy": false, 00:25:54.727 "get_zone_info": false, 00:25:54.727 "zone_management": false, 00:25:54.727 "zone_append": false, 00:25:54.727 "compare": false, 00:25:54.727 "compare_and_write": false, 00:25:54.727 
"abort": false, 00:25:54.727 "seek_hole": false, 00:25:54.727 "seek_data": false, 00:25:54.727 "copy": false, 00:25:54.727 "nvme_iov_md": false 00:25:54.727 }, 00:25:54.727 "memory_domains": [ 00:25:54.727 { 00:25:54.727 "dma_device_id": "system", 00:25:54.727 "dma_device_type": 1 00:25:54.727 }, 00:25:54.727 { 00:25:54.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.727 "dma_device_type": 2 00:25:54.727 }, 00:25:54.727 { 00:25:54.727 "dma_device_id": "system", 00:25:54.727 "dma_device_type": 1 00:25:54.727 }, 00:25:54.727 { 00:25:54.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.727 "dma_device_type": 2 00:25:54.727 }, 00:25:54.727 { 00:25:54.727 "dma_device_id": "system", 00:25:54.727 "dma_device_type": 1 00:25:54.727 }, 00:25:54.727 { 00:25:54.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.727 "dma_device_type": 2 00:25:54.727 }, 00:25:54.727 { 00:25:54.727 "dma_device_id": "system", 00:25:54.727 "dma_device_type": 1 00:25:54.727 }, 00:25:54.727 { 00:25:54.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.727 "dma_device_type": 2 00:25:54.727 } 00:25:54.727 ], 00:25:54.727 "driver_specific": { 00:25:54.727 "raid": { 00:25:54.727 "uuid": "03df04a6-93b1-4a5d-a146-d3260f50ab9f", 00:25:54.727 "strip_size_kb": 0, 00:25:54.727 "state": "online", 00:25:54.727 "raid_level": "raid1", 00:25:54.727 "superblock": true, 00:25:54.727 "num_base_bdevs": 4, 00:25:54.727 "num_base_bdevs_discovered": 4, 00:25:54.727 "num_base_bdevs_operational": 4, 00:25:54.727 "base_bdevs_list": [ 00:25:54.727 { 00:25:54.727 "name": "NewBaseBdev", 00:25:54.727 "uuid": "21a47209-ea5a-4160-9371-b25ad35a1abf", 00:25:54.727 "is_configured": true, 00:25:54.727 "data_offset": 2048, 00:25:54.727 "data_size": 63488 00:25:54.727 }, 00:25:54.727 { 00:25:54.727 "name": "BaseBdev2", 00:25:54.727 "uuid": "8378e3b9-a239-44a5-b134-aa77c4b7cf4b", 00:25:54.727 "is_configured": true, 00:25:54.727 "data_offset": 2048, 00:25:54.727 "data_size": 63488 00:25:54.727 }, 00:25:54.727 { 
00:25:54.727 "name": "BaseBdev3", 00:25:54.727 "uuid": "8aae60de-3a20-4c99-8dfd-2a371b8075d0", 00:25:54.727 "is_configured": true, 00:25:54.727 "data_offset": 2048, 00:25:54.728 "data_size": 63488 00:25:54.728 }, 00:25:54.728 { 00:25:54.728 "name": "BaseBdev4", 00:25:54.728 "uuid": "46efeba7-a866-42f1-8413-382f011a8443", 00:25:54.728 "is_configured": true, 00:25:54.728 "data_offset": 2048, 00:25:54.728 "data_size": 63488 00:25:54.728 } 00:25:54.728 ] 00:25:54.728 } 00:25:54.728 } 00:25:54.728 }' 00:25:54.728 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:54.728 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:25:54.728 BaseBdev2 00:25:54.728 BaseBdev3 00:25:54.728 BaseBdev4' 00:25:54.728 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:54.987 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:25:54.987 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:54.987 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:25:54.987 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.987 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.988 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.268 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:25:55.268 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:25:55.268 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:25:55.268 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.268 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:55.268 [2024-11-20 13:45:57.908831] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:55.268 [2024-11-20 13:45:57.909002] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:55.269 [2024-11-20 13:45:57.909210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:55.269 [2024-11-20 13:45:57.909770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:55.269 [2024-11-20 13:45:57.909930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:25:55.269 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.269 13:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74207 00:25:55.269 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74207 ']' 00:25:55.269 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74207 00:25:55.269 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:25:55.269 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:55.269 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74207 00:25:55.269 killing process with pid 74207 00:25:55.269 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:55.269 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:55.269 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74207' 00:25:55.269 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74207 00:25:55.269 [2024-11-20 13:45:57.953673] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:55.269 13:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74207 00:25:55.551 [2024-11-20 13:45:58.329076] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:56.925 ************************************ 00:25:56.925 END TEST raid_state_function_test_sb 00:25:56.925 ************************************ 00:25:56.925 13:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:25:56.925 00:25:56.925 real 0m13.245s 
00:25:56.925 user 0m21.984s 00:25:56.925 sys 0m1.842s 00:25:56.925 13:45:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:56.925 13:45:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.925 13:45:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:25:56.925 13:45:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:56.925 13:45:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:56.925 13:45:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:56.925 ************************************ 00:25:56.925 START TEST raid_superblock_test 00:25:56.925 ************************************ 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:25:56.925 13:45:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74894 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74894 00:25:56.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74894 ']' 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:56.925 13:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.925 [2024-11-20 13:45:59.599215] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:25:56.925 [2024-11-20 13:45:59.599409] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74894 ] 00:25:56.925 [2024-11-20 13:45:59.777138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.184 [2024-11-20 13:45:59.910915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.443 [2024-11-20 13:46:00.120179] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:57.443 [2024-11-20 13:46:00.120255] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:58.011 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.011 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:25:58.011 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:25:58.011 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:58.011 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:25:58.011 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:25:58.011 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:58.011 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:58.011 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:58.011 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:58.011 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:25:58.011 
13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.011 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.011 malloc1 00:25:58.011 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.011 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.012 [2024-11-20 13:46:00.692163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:58.012 [2024-11-20 13:46:00.692470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:58.012 [2024-11-20 13:46:00.692562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:58.012 [2024-11-20 13:46:00.692745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:58.012 [2024-11-20 13:46:00.695721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:58.012 [2024-11-20 13:46:00.695905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:58.012 pt1 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.012 malloc2 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.012 [2024-11-20 13:46:00.750790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:58.012 [2024-11-20 13:46:00.751014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:58.012 [2024-11-20 13:46:00.751108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:58.012 [2024-11-20 13:46:00.751340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:58.012 [2024-11-20 13:46:00.754400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:58.012 [2024-11-20 13:46:00.754555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:58.012 
pt2 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.012 malloc3 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.012 [2024-11-20 13:46:00.822939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:58.012 [2024-11-20 13:46:00.823145] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:58.012 [2024-11-20 13:46:00.823242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:58.012 [2024-11-20 13:46:00.823357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:58.012 [2024-11-20 13:46:00.826370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:58.012 pt3 00:25:58.012 [2024-11-20 13:46:00.826555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.012 malloc4 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.012 [2024-11-20 13:46:00.873031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:58.012 [2024-11-20 13:46:00.873264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:58.012 [2024-11-20 13:46:00.873422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:25:58.012 [2024-11-20 13:46:00.873530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:58.012 [2024-11-20 13:46:00.876438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:58.012 [2024-11-20 13:46:00.876633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:58.012 pt4 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.012 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.012 [2024-11-20 13:46:00.885292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:58.012 [2024-11-20 13:46:00.887975] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:58.013 [2024-11-20 13:46:00.888193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:58.013 [2024-11-20 13:46:00.888333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:58.013 [2024-11-20 13:46:00.888641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:58.013 [2024-11-20 13:46:00.888768] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:58.013 [2024-11-20 13:46:00.889123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:58.013 [2024-11-20 13:46:00.889346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:58.013 [2024-11-20 13:46:00.889370] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:25:58.013 [2024-11-20 13:46:00.889593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:58.013 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.013 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:58.013 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:25:58.013 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:25:58.013 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:25:58.013 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:25:58.013 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:25:58.013 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:25:58.013 
13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:25:58.013 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:25:58.013 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:25:58.013 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:58.013 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.013 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.013 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.013 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.271 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:25:58.271 "name": "raid_bdev1", 00:25:58.271 "uuid": "6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5", 00:25:58.271 "strip_size_kb": 0, 00:25:58.271 "state": "online", 00:25:58.271 "raid_level": "raid1", 00:25:58.271 "superblock": true, 00:25:58.271 "num_base_bdevs": 4, 00:25:58.271 "num_base_bdevs_discovered": 4, 00:25:58.271 "num_base_bdevs_operational": 4, 00:25:58.271 "base_bdevs_list": [ 00:25:58.271 { 00:25:58.271 "name": "pt1", 00:25:58.271 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:58.271 "is_configured": true, 00:25:58.271 "data_offset": 2048, 00:25:58.271 "data_size": 63488 00:25:58.271 }, 00:25:58.271 { 00:25:58.271 "name": "pt2", 00:25:58.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:58.271 "is_configured": true, 00:25:58.271 "data_offset": 2048, 00:25:58.271 "data_size": 63488 00:25:58.271 }, 00:25:58.271 { 00:25:58.271 "name": "pt3", 00:25:58.271 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:58.271 "is_configured": true, 00:25:58.271 "data_offset": 2048, 00:25:58.271 "data_size": 63488 
00:25:58.271 }, 00:25:58.271 { 00:25:58.271 "name": "pt4", 00:25:58.271 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:58.271 "is_configured": true, 00:25:58.271 "data_offset": 2048, 00:25:58.271 "data_size": 63488 00:25:58.271 } 00:25:58.271 ] 00:25:58.271 }' 00:25:58.271 13:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:25:58.271 13:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.530 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:25:58.530 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:25:58.530 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:25:58.530 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:25:58.530 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:25:58.530 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:25:58.530 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:25:58.530 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.530 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:25:58.530 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.789 [2024-11-20 13:46:01.450160] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:58.789 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.789 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:58.789 "name": "raid_bdev1", 00:25:58.789 "aliases": [ 00:25:58.789 "6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5" 00:25:58.789 ], 
00:25:58.789 "product_name": "Raid Volume",
00:25:58.789 "block_size": 512,
00:25:58.789 "num_blocks": 63488,
00:25:58.789 "uuid": "6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5",
00:25:58.789 "assigned_rate_limits": {
00:25:58.789 "rw_ios_per_sec": 0,
00:25:58.789 "rw_mbytes_per_sec": 0,
00:25:58.789 "r_mbytes_per_sec": 0,
00:25:58.789 "w_mbytes_per_sec": 0
00:25:58.789 },
00:25:58.789 "claimed": false,
00:25:58.789 "zoned": false,
00:25:58.789 "supported_io_types": {
00:25:58.789 "read": true,
00:25:58.789 "write": true,
00:25:58.789 "unmap": false,
00:25:58.789 "flush": false,
00:25:58.789 "reset": true,
00:25:58.789 "nvme_admin": false,
00:25:58.789 "nvme_io": false,
00:25:58.789 "nvme_io_md": false,
00:25:58.789 "write_zeroes": true,
00:25:58.789 "zcopy": false,
00:25:58.789 "get_zone_info": false,
00:25:58.789 "zone_management": false,
00:25:58.789 "zone_append": false,
00:25:58.789 "compare": false,
00:25:58.789 "compare_and_write": false,
00:25:58.789 "abort": false,
00:25:58.789 "seek_hole": false,
00:25:58.789 "seek_data": false,
00:25:58.789 "copy": false,
00:25:58.789 "nvme_iov_md": false
00:25:58.789 },
00:25:58.789 "memory_domains": [
00:25:58.789 {
00:25:58.789 "dma_device_id": "system",
00:25:58.789 "dma_device_type": 1
00:25:58.789 },
00:25:58.789 {
00:25:58.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:58.789 "dma_device_type": 2
00:25:58.789 },
00:25:58.789 {
00:25:58.789 "dma_device_id": "system",
00:25:58.789 "dma_device_type": 1
00:25:58.789 },
00:25:58.789 {
00:25:58.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:58.789 "dma_device_type": 2
00:25:58.789 },
00:25:58.789 {
00:25:58.789 "dma_device_id": "system",
00:25:58.789 "dma_device_type": 1
00:25:58.789 },
00:25:58.789 {
00:25:58.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:58.789 "dma_device_type": 2
00:25:58.789 },
00:25:58.789 {
00:25:58.789 "dma_device_id": "system",
00:25:58.789 "dma_device_type": 1
00:25:58.789 },
00:25:58.789 {
00:25:58.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:25:58.789 "dma_device_type": 2
00:25:58.789 }
00:25:58.789 ],
00:25:58.789 "driver_specific": {
00:25:58.789 "raid": {
00:25:58.789 "uuid": "6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5",
00:25:58.789 "strip_size_kb": 0,
00:25:58.789 "state": "online",
00:25:58.789 "raid_level": "raid1",
00:25:58.789 "superblock": true,
00:25:58.789 "num_base_bdevs": 4,
00:25:58.789 "num_base_bdevs_discovered": 4,
00:25:58.789 "num_base_bdevs_operational": 4,
00:25:58.789 "base_bdevs_list": [
00:25:58.789 {
00:25:58.789 "name": "pt1",
00:25:58.789 "uuid": "00000000-0000-0000-0000-000000000001",
00:25:58.789 "is_configured": true,
00:25:58.789 "data_offset": 2048,
00:25:58.789 "data_size": 63488
00:25:58.789 },
00:25:58.789 {
00:25:58.789 "name": "pt2",
00:25:58.789 "uuid": "00000000-0000-0000-0000-000000000002",
00:25:58.789 "is_configured": true,
00:25:58.789 "data_offset": 2048,
00:25:58.789 "data_size": 63488
00:25:58.789 },
00:25:58.789 {
00:25:58.789 "name": "pt3",
00:25:58.789 "uuid": "00000000-0000-0000-0000-000000000003",
00:25:58.790 "is_configured": true,
00:25:58.790 "data_offset": 2048,
00:25:58.790 "data_size": 63488
00:25:58.790 },
00:25:58.790 {
00:25:58.790 "name": "pt4",
00:25:58.790 "uuid": "00000000-0000-0000-0000-000000000004",
00:25:58.790 "is_configured": true,
00:25:58.790 "data_offset": 2048,
00:25:58.790 "data_size": 63488
00:25:58.790 }
00:25:58.790 ]
00:25:58.790 }
00:25:58.790 }
00:25:58.790 }'
00:25:58.790 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:25:58.790 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:25:58.790 pt2
00:25:58.790 pt3
00:25:58.790 pt4'
00:25:58.790 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:25:58.790 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:25:58.790 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:25:58.790 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:25:58.790 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:58.790 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:58.790 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:25:58.790 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:58.790 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:25:58.790 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:25:58.790 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:25:58.790 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:25:58.790 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:25:58.790 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:58.790 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:58.790 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:25:59.047 [2024-11-20 13:46:01.826252] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5 ']'
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.047 [2024-11-20 13:46:01.873837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:25:59.047 [2024-11-20 13:46:01.873909] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:25:59.047 [2024-11-20 13:46:01.874054] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:25:59.047 [2024-11-20 13:46:01.874192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:25:59.047 [2024-11-20 13:46:01.874240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:25:59.047 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:25:59.048 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.048 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.048 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.048 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:25:59.048 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:25:59.048 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.048 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.048 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.048 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:25:59.048 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:25:59.048 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.048 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.048 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.048 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:25:59.048 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:25:59.048 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.048 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.306 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.306 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:25:59.306 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.306 13:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.306 13:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:25:59.306 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.306 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:25:59.306 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:25:59.306 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:25:59.306 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:25:59.306 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:25:59.306 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:59.306 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:25:59.306 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:59.306 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:25:59.306 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.306 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.306 [2024-11-20 13:46:02.025899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:25:59.306 [2024-11-20 13:46:02.028478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:25:59.306 [2024-11-20 13:46:02.028597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:25:59.306 [2024-11-20 13:46:02.028668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:25:59.306 [2024-11-20 13:46:02.028739] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:25:59.307 [2024-11-20 13:46:02.028814] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:25:59.307 [2024-11-20 13:46:02.028846] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:25:59.307 [2024-11-20 13:46:02.028877] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:25:59.307 [2024-11-20 13:46:02.028921] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:25:59.307 [2024-11-20 13:46:02.028939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:25:59.307 request:
00:25:59.307 {
00:25:59.307 "name": "raid_bdev1",
00:25:59.307 "raid_level": "raid1",
00:25:59.307 "base_bdevs": [
00:25:59.307 "malloc1",
00:25:59.307 "malloc2",
00:25:59.307 "malloc3",
00:25:59.307 "malloc4"
00:25:59.307 ],
00:25:59.307 "superblock": false,
00:25:59.307 "method": "bdev_raid_create",
00:25:59.307 "req_id": 1
00:25:59.307 }
00:25:59.307 Got JSON-RPC error response
00:25:59.307 response:
00:25:59.307 {
00:25:59.307 "code": -17,
00:25:59.307 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:25:59.307 }
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.307 [2024-11-20 13:46:02.093914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:25:59.307 [2024-11-20 13:46:02.093991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:59.307 [2024-11-20 13:46:02.094019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:25:59.307 [2024-11-20 13:46:02.094036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:59.307 [2024-11-20 13:46:02.096931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:59.307 [2024-11-20 13:46:02.096979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:25:59.307 [2024-11-20 13:46:02.097086] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:25:59.307 [2024-11-20 13:46:02.097163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:25:59.307 pt1
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:59.307 "name": "raid_bdev1",
00:25:59.307 "uuid": "6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5",
00:25:59.307 "strip_size_kb": 0,
00:25:59.307 "state": "configuring",
00:25:59.307 "raid_level": "raid1",
00:25:59.307 "superblock": true,
00:25:59.307 "num_base_bdevs": 4,
00:25:59.307 "num_base_bdevs_discovered": 1,
00:25:59.307 "num_base_bdevs_operational": 4,
00:25:59.307 "base_bdevs_list": [
00:25:59.307 {
00:25:59.307 "name": "pt1",
00:25:59.307 "uuid": "00000000-0000-0000-0000-000000000001",
00:25:59.307 "is_configured": true,
00:25:59.307 "data_offset": 2048,
00:25:59.307 "data_size": 63488
00:25:59.307 },
00:25:59.307 {
00:25:59.307 "name": null,
00:25:59.307 "uuid": "00000000-0000-0000-0000-000000000002",
00:25:59.307 "is_configured": false,
00:25:59.307 "data_offset": 2048,
00:25:59.307 "data_size": 63488
00:25:59.307 },
00:25:59.307 {
00:25:59.307 "name": null,
00:25:59.307 "uuid": "00000000-0000-0000-0000-000000000003",
00:25:59.307 "is_configured": false,
00:25:59.307 "data_offset": 2048,
00:25:59.307 "data_size": 63488
00:25:59.307 },
00:25:59.307 {
00:25:59.307 "name": null,
00:25:59.307 "uuid": "00000000-0000-0000-0000-000000000004",
00:25:59.307 "is_configured": false,
00:25:59.307 "data_offset": 2048,
00:25:59.307 "data_size": 63488
00:25:59.307 }
00:25:59.307 ]
00:25:59.307 }'
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:59.307 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.875 [2024-11-20 13:46:02.626113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:25:59.875 [2024-11-20 13:46:02.626222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:59.875 [2024-11-20 13:46:02.626264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:25:59.875 [2024-11-20 13:46:02.626297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:59.875 [2024-11-20 13:46:02.626909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:59.875 [2024-11-20 13:46:02.626957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:25:59.875 [2024-11-20 13:46:02.627064] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:25:59.875 [2024-11-20 13:46:02.627125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:25:59.875 pt2
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.875 [2024-11-20 13:46:02.634043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:25:59.875 "name": "raid_bdev1",
00:25:59.875 "uuid": "6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5",
00:25:59.875 "strip_size_kb": 0,
00:25:59.875 "state": "configuring",
00:25:59.875 "raid_level": "raid1",
00:25:59.875 "superblock": true,
00:25:59.875 "num_base_bdevs": 4,
00:25:59.875 "num_base_bdevs_discovered": 1,
00:25:59.875 "num_base_bdevs_operational": 4,
00:25:59.875 "base_bdevs_list": [
00:25:59.875 {
00:25:59.875 "name": "pt1",
00:25:59.875 "uuid": "00000000-0000-0000-0000-000000000001",
00:25:59.875 "is_configured": true,
00:25:59.875 "data_offset": 2048,
00:25:59.875 "data_size": 63488
00:25:59.875 },
00:25:59.875 {
00:25:59.875 "name": null,
00:25:59.875 "uuid": "00000000-0000-0000-0000-000000000002",
00:25:59.875 "is_configured": false,
00:25:59.875 "data_offset": 0,
00:25:59.875 "data_size": 63488
00:25:59.875 },
00:25:59.875 {
00:25:59.875 "name": null,
00:25:59.875 "uuid": "00000000-0000-0000-0000-000000000003",
00:25:59.875 "is_configured": false,
00:25:59.875 "data_offset": 2048,
00:25:59.875 "data_size": 63488
00:25:59.875 },
00:25:59.875 {
00:25:59.875 "name": null,
00:25:59.875 "uuid": "00000000-0000-0000-0000-000000000004",
00:25:59.875 "is_configured": false,
00:25:59.875 "data_offset": 2048,
00:25:59.875 "data_size": 63488
00:25:59.875 }
00:25:59.875 ]
00:25:59.875 }'
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:25:59.875 13:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:00.443 [2024-11-20 13:46:03.202279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:26:00.443 [2024-11-20 13:46:03.202358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:26:00.443 [2024-11-20 13:46:03.202389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:26:00.443 [2024-11-20 13:46:03.202405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:26:00.443 [2024-11-20 13:46:03.203000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:26:00.443 [2024-11-20 13:46:03.203035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:26:00.443 [2024-11-20 13:46:03.203142] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:26:00.443 [2024-11-20 13:46:03.203175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:26:00.443 pt2
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:00.443 [2024-11-20 13:46:03.210230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:26:00.443 [2024-11-20 13:46:03.210322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:26:00.443 [2024-11-20 13:46:03.210348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:26:00.443 [2024-11-20 13:46:03.210361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:26:00.443 [2024-11-20 13:46:03.210802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:26:00.443 [2024-11-20 13:46:03.210841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:26:00.443 [2024-11-20 13:46:03.210949] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:26:00.443 [2024-11-20 13:46:03.210978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:26:00.443 pt3
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:00.443 [2024-11-20 13:46:03.218213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:26:00.443 [2024-11-20 13:46:03.218261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:26:00.443 [2024-11-20 13:46:03.218286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:26:00.443 [2024-11-20 13:46:03.218300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:26:00.443 [2024-11-20 13:46:03.218749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:26:00.443 [2024-11-20 13:46:03.218789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:26:00.443 [2024-11-20 13:46:03.218870] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:26:00.443 [2024-11-20 13:46:03.218920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:26:00.443 [2024-11-20 13:46:03.219107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:26:00.443 [2024-11-20 13:46:03.219132] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:26:00.443 [2024-11-20 13:46:03.219453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:26:00.443 [2024-11-20 13:46:03.219661] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:26:00.443 [2024-11-20 13:46:03.219691] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:26:00.443 [2024-11-20 13:46:03.219850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:26:00.443 pt4
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.443 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:26:00.443 "name": "raid_bdev1",
00:26:00.443 "uuid": "6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5",
00:26:00.443 "strip_size_kb": 0,
00:26:00.444 "state": "online",
00:26:00.444 "raid_level": "raid1",
00:26:00.444 "superblock": true,
00:26:00.444 "num_base_bdevs": 4,
00:26:00.444 "num_base_bdevs_discovered": 4,
00:26:00.444 "num_base_bdevs_operational": 4,
00:26:00.444 "base_bdevs_list": [
00:26:00.444 {
00:26:00.444 "name": "pt1",
00:26:00.444 "uuid": "00000000-0000-0000-0000-000000000001",
00:26:00.444 "is_configured": true,
00:26:00.444 "data_offset": 2048,
00:26:00.444 "data_size": 63488
00:26:00.444 },
00:26:00.444 {
00:26:00.444 "name": "pt2",
00:26:00.444 "uuid": "00000000-0000-0000-0000-000000000002",
00:26:00.444 "is_configured": true,
00:26:00.444 "data_offset": 2048,
00:26:00.444 "data_size": 63488
00:26:00.444 },
00:26:00.444 {
00:26:00.444 "name": "pt3",
00:26:00.444 "uuid": "00000000-0000-0000-0000-000000000003",
00:26:00.444 "is_configured": true,
00:26:00.444 "data_offset": 2048,
00:26:00.444 "data_size": 63488
00:26:00.444 },
00:26:00.444 {
00:26:00.444 "name": "pt4",
00:26:00.444 "uuid": "00000000-0000-0000-0000-000000000004",
00:26:00.444 "is_configured": true,
00:26:00.444 "data_offset": 2048,
00:26:00.444 "data_size": 63488
00:26:00.444 }
00:26:00.444 ]
00:26:00.444 }'
00:26:00.444 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:26:00.444 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:01.010 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:26:01.010 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:26:01.010 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:26:01.010 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:26:01.010 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:26:01.010 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:26:01.010 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:26:01.010 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:01.010 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:26:01.010 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:26:01.010 [2024-11-20 13:46:03.766926] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:26:01.010 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:01.010 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:26:01.010 "name": "raid_bdev1",
00:26:01.010 "aliases": [
00:26:01.010 "6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5"
00:26:01.010 ],
00:26:01.010 "product_name": "Raid Volume",
00:26:01.010 "block_size": 512,
00:26:01.010 "num_blocks": 63488,
00:26:01.010 "uuid": "6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5",
00:26:01.010 "assigned_rate_limits": {
00:26:01.010 "rw_ios_per_sec": 0,
00:26:01.010 "rw_mbytes_per_sec": 0,
00:26:01.010 "r_mbytes_per_sec": 0,
00:26:01.010 "w_mbytes_per_sec": 0
00:26:01.010 },
00:26:01.010 "claimed": false,
00:26:01.010 "zoned": false,
00:26:01.010 "supported_io_types": {
00:26:01.010 "read": true,
00:26:01.010 "write": true,
00:26:01.010 "unmap": false,
00:26:01.010 "flush": false,
00:26:01.010 "reset": true,
00:26:01.010 "nvme_admin": false,
00:26:01.010 "nvme_io": false,
00:26:01.010 "nvme_io_md": false,
00:26:01.010 "write_zeroes": true,
00:26:01.010 "zcopy": false,
00:26:01.010 "get_zone_info": false,
00:26:01.010 "zone_management": false,
00:26:01.010 "zone_append": false,
00:26:01.010 "compare": false,
00:26:01.010 "compare_and_write": false,
00:26:01.010 "abort": false,
00:26:01.010 "seek_hole": false,
00:26:01.010 "seek_data": false,
00:26:01.011 "copy": false,
00:26:01.011 "nvme_iov_md": false
00:26:01.011 },
00:26:01.011 "memory_domains": [
00:26:01.011 {
00:26:01.011 "dma_device_id": "system",
00:26:01.011 "dma_device_type": 1
00:26:01.011 },
00:26:01.011 {
00:26:01.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:26:01.011 "dma_device_type": 2
00:26:01.011 },
00:26:01.011 {
00:26:01.011 "dma_device_id": "system",
00:26:01.011 "dma_device_type": 1
00:26:01.011 },
00:26:01.011 {
00:26:01.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:26:01.011 "dma_device_type": 2
00:26:01.011 },
00:26:01.011 {
00:26:01.011 "dma_device_id": "system",
00:26:01.011 "dma_device_type": 1
00:26:01.011 },
00:26:01.011 {
00:26:01.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:26:01.011 "dma_device_type": 2
00:26:01.011 },
00:26:01.011 {
00:26:01.011 "dma_device_id": "system",
00:26:01.011 "dma_device_type": 1
00:26:01.011 },
00:26:01.011 {
00:26:01.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:26:01.011 "dma_device_type": 2
00:26:01.011 }
00:26:01.011 ],
00:26:01.011 "driver_specific": {
00:26:01.011 "raid": {
00:26:01.011 "uuid": "6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5",
00:26:01.011 "strip_size_kb": 0,
00:26:01.011 "state": "online",
00:26:01.011 "raid_level": "raid1",
00:26:01.011 "superblock": true,
00:26:01.011 "num_base_bdevs": 4,
00:26:01.011 "num_base_bdevs_discovered": 4,
00:26:01.011 "num_base_bdevs_operational": 4,
00:26:01.011 "base_bdevs_list": [
00:26:01.011 {
00:26:01.011 "name": "pt1",
00:26:01.011 "uuid": "00000000-0000-0000-0000-000000000001",
00:26:01.011 "is_configured": true,
00:26:01.011 "data_offset": 2048,
00:26:01.011 "data_size": 63488
00:26:01.011 },
00:26:01.011 {
00:26:01.011 "name": "pt2",
00:26:01.011 "uuid": "00000000-0000-0000-0000-000000000002",
00:26:01.011 "is_configured": true,
00:26:01.011 "data_offset": 2048,
00:26:01.011 "data_size": 63488
00:26:01.011 },
00:26:01.011 {
00:26:01.011 "name": "pt3",
00:26:01.011 "uuid": "00000000-0000-0000-0000-000000000003",
00:26:01.011 "is_configured": true,
00:26:01.011 "data_offset": 2048,
00:26:01.011 "data_size": 63488
00:26:01.011 },
00:26:01.011 {
00:26:01.011 "name": "pt4",
00:26:01.011 "uuid":
"00000000-0000-0000-0000-000000000004", 00:26:01.011 "is_configured": true, 00:26:01.011 "data_offset": 2048, 00:26:01.011 "data_size": 63488 00:26:01.011 } 00:26:01.011 ] 00:26:01.011 } 00:26:01.011 } 00:26:01.011 }' 00:26:01.011 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:01.011 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:01.011 pt2 00:26:01.011 pt3 00:26:01.011 pt4' 00:26:01.011 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:01.011 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:01.011 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:01.011 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:01.011 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.011 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:01.011 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.273 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.273 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:01.273 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:01.273 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:01.273 13:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:01.273 13:46:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:01.273 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.273 13:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.273 [2024-11-20 13:46:04.147012] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:01.273 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5 '!=' 6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5 ']' 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.544 [2024-11-20 13:46:04.190712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:01.544 13:46:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:01.544 "name": "raid_bdev1", 00:26:01.544 "uuid": "6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5", 00:26:01.544 "strip_size_kb": 0, 00:26:01.544 "state": "online", 
00:26:01.544 "raid_level": "raid1", 00:26:01.544 "superblock": true, 00:26:01.544 "num_base_bdevs": 4, 00:26:01.544 "num_base_bdevs_discovered": 3, 00:26:01.544 "num_base_bdevs_operational": 3, 00:26:01.544 "base_bdevs_list": [ 00:26:01.544 { 00:26:01.544 "name": null, 00:26:01.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.544 "is_configured": false, 00:26:01.544 "data_offset": 0, 00:26:01.544 "data_size": 63488 00:26:01.544 }, 00:26:01.544 { 00:26:01.544 "name": "pt2", 00:26:01.544 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:01.544 "is_configured": true, 00:26:01.544 "data_offset": 2048, 00:26:01.544 "data_size": 63488 00:26:01.544 }, 00:26:01.544 { 00:26:01.544 "name": "pt3", 00:26:01.544 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:01.544 "is_configured": true, 00:26:01.544 "data_offset": 2048, 00:26:01.544 "data_size": 63488 00:26:01.544 }, 00:26:01.544 { 00:26:01.544 "name": "pt4", 00:26:01.544 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:01.544 "is_configured": true, 00:26:01.544 "data_offset": 2048, 00:26:01.544 "data_size": 63488 00:26:01.544 } 00:26:01.544 ] 00:26:01.544 }' 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:01.544 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.113 [2024-11-20 13:46:04.730752] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:02.113 [2024-11-20 13:46:04.730819] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:02.113 [2024-11-20 13:46:04.730979] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:26:02.113 [2024-11-20 13:46:04.731084] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:02.113 [2024-11-20 13:46:04.731100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:02.113 
13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.113 [2024-11-20 13:46:04.822747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:02.113 [2024-11-20 13:46:04.822842] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.113 [2024-11-20 13:46:04.822878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:26:02.113 [2024-11-20 13:46:04.822892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.113 [2024-11-20 13:46:04.825946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.113 [2024-11-20 13:46:04.826002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:02.113 [2024-11-20 13:46:04.826119] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:02.113 [2024-11-20 13:46:04.826196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:02.113 pt2 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.113 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:02.113 "name": "raid_bdev1", 00:26:02.113 "uuid": "6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5", 00:26:02.113 "strip_size_kb": 0, 00:26:02.113 "state": "configuring", 00:26:02.113 "raid_level": "raid1", 00:26:02.113 "superblock": true, 00:26:02.113 "num_base_bdevs": 4, 00:26:02.113 "num_base_bdevs_discovered": 1, 00:26:02.113 "num_base_bdevs_operational": 3, 00:26:02.113 "base_bdevs_list": [ 00:26:02.113 { 00:26:02.113 "name": null, 00:26:02.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.113 "is_configured": false, 00:26:02.113 "data_offset": 2048, 00:26:02.113 "data_size": 63488 00:26:02.113 }, 00:26:02.113 { 00:26:02.114 "name": "pt2", 00:26:02.114 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:02.114 "is_configured": true, 00:26:02.114 "data_offset": 2048, 00:26:02.114 "data_size": 63488 00:26:02.114 }, 00:26:02.114 { 00:26:02.114 "name": null, 00:26:02.114 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:02.114 "is_configured": false, 00:26:02.114 "data_offset": 2048, 00:26:02.114 "data_size": 63488 00:26:02.114 }, 00:26:02.114 { 00:26:02.114 "name": null, 00:26:02.114 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:02.114 "is_configured": false, 00:26:02.114 "data_offset": 2048, 00:26:02.114 "data_size": 63488 00:26:02.114 } 00:26:02.114 ] 00:26:02.114 }' 
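As a hedged aside on the trace above: `verify_raid_bdev_state` selects the named raid bdev out of the `bdev_raid_get_bdevs all` output (the `jq -r '.[] | select(.name == "raid_bdev1")'` step) and compares state, raid level, strip size, and base-bdev counts against the expected values. A minimal Python sketch of that check follows; the helper name and sample JSON are illustrative (shaped like the `raid_bdev_info` dump in the log after `pt1` removal), not part of SPDK.

```python
import json

def verify_raid_bdev_state(info_json, name, expected_state, raid_level,
                           strip_size, num_operational):
    """Illustrative re-implementation of the shell helper's checks:
    pick the named raid bdev from bdev_raid_get_bdevs output and
    compare the fields the test asserts on."""
    bdevs = json.loads(info_json)
    # Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    # discovered = number of configured entries in base_bdevs_list
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    assert info["num_base_bdevs_operational"] == num_operational
    return info

# Sample shaped like the raid_bdev_info dump in the log (after pt1 removal):
sample = json.dumps([{
    "name": "raid_bdev1", "state": "configuring", "raid_level": "raid1",
    "strip_size_kb": 0, "num_base_bdevs": 4, "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
        {"name": None, "is_configured": False},
        {"name": "pt2", "is_configured": True},
        {"name": None, "is_configured": False},
        {"name": None, "is_configured": False},
    ],
}])
info = verify_raid_bdev_state(sample, "raid_bdev1", "configuring", "raid1", 0, 3)
```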
00:26:02.114 13:46:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:02.114 13:46:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.682 [2024-11-20 13:46:05.371023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:02.682 [2024-11-20 13:46:05.371253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:02.682 [2024-11-20 13:46:05.371359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:26:02.682 [2024-11-20 13:46:05.371614] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:02.682 [2024-11-20 13:46:05.372323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:02.682 [2024-11-20 13:46:05.372355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:02.682 [2024-11-20 13:46:05.372462] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:02.682 [2024-11-20 13:46:05.372508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:02.682 pt3 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:02.682 "name": "raid_bdev1", 00:26:02.682 "uuid": "6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5", 00:26:02.682 "strip_size_kb": 0, 00:26:02.682 "state": "configuring", 00:26:02.682 "raid_level": "raid1", 00:26:02.682 "superblock": true, 00:26:02.682 "num_base_bdevs": 4, 00:26:02.682 "num_base_bdevs_discovered": 2, 00:26:02.682 "num_base_bdevs_operational": 3, 00:26:02.682 
"base_bdevs_list": [ 00:26:02.682 { 00:26:02.682 "name": null, 00:26:02.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.682 "is_configured": false, 00:26:02.682 "data_offset": 2048, 00:26:02.682 "data_size": 63488 00:26:02.682 }, 00:26:02.682 { 00:26:02.682 "name": "pt2", 00:26:02.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:02.682 "is_configured": true, 00:26:02.682 "data_offset": 2048, 00:26:02.682 "data_size": 63488 00:26:02.682 }, 00:26:02.682 { 00:26:02.682 "name": "pt3", 00:26:02.682 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:02.682 "is_configured": true, 00:26:02.682 "data_offset": 2048, 00:26:02.682 "data_size": 63488 00:26:02.682 }, 00:26:02.682 { 00:26:02.682 "name": null, 00:26:02.682 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:02.682 "is_configured": false, 00:26:02.682 "data_offset": 2048, 00:26:02.682 "data_size": 63488 00:26:02.682 } 00:26:02.682 ] 00:26:02.682 }' 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:02.682 13:46:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.251 [2024-11-20 13:46:05.923221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:03.251 [2024-11-20 13:46:05.923478] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:03.251 [2024-11-20 13:46:05.923571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:26:03.251 [2024-11-20 13:46:05.923745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:03.251 [2024-11-20 13:46:05.924440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:03.251 [2024-11-20 13:46:05.924625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:03.251 [2024-11-20 13:46:05.924762] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:03.251 [2024-11-20 13:46:05.924797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:03.251 [2024-11-20 13:46:05.924990] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:03.251 [2024-11-20 13:46:05.925007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:03.251 [2024-11-20 13:46:05.925326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:26:03.251 [2024-11-20 13:46:05.925502] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:03.251 [2024-11-20 13:46:05.925522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:26:03.251 [2024-11-20 13:46:05.925698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:03.251 pt4 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:03.251 "name": "raid_bdev1", 00:26:03.251 "uuid": "6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5", 00:26:03.251 "strip_size_kb": 0, 00:26:03.251 "state": "online", 00:26:03.251 "raid_level": "raid1", 00:26:03.251 "superblock": true, 00:26:03.251 "num_base_bdevs": 4, 00:26:03.251 "num_base_bdevs_discovered": 3, 00:26:03.251 "num_base_bdevs_operational": 3, 00:26:03.251 "base_bdevs_list": [ 00:26:03.251 { 00:26:03.251 "name": null, 00:26:03.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.251 "is_configured": false, 00:26:03.251 
"data_offset": 2048, 00:26:03.251 "data_size": 63488 00:26:03.251 }, 00:26:03.251 { 00:26:03.251 "name": "pt2", 00:26:03.251 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:03.251 "is_configured": true, 00:26:03.251 "data_offset": 2048, 00:26:03.251 "data_size": 63488 00:26:03.251 }, 00:26:03.251 { 00:26:03.251 "name": "pt3", 00:26:03.251 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:03.251 "is_configured": true, 00:26:03.251 "data_offset": 2048, 00:26:03.251 "data_size": 63488 00:26:03.251 }, 00:26:03.251 { 00:26:03.251 "name": "pt4", 00:26:03.251 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:03.251 "is_configured": true, 00:26:03.251 "data_offset": 2048, 00:26:03.251 "data_size": 63488 00:26:03.251 } 00:26:03.251 ] 00:26:03.251 }' 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:03.251 13:46:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.819 [2024-11-20 13:46:06.467402] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:03.819 [2024-11-20 13:46:06.467438] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:03.819 [2024-11-20 13:46:06.467537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:03.819 [2024-11-20 13:46:06.467658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:03.819 [2024-11-20 13:46:06.467683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:26:03.819 13:46:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.819 [2024-11-20 13:46:06.543416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:03.819 [2024-11-20 13:46:06.543643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:26:03.819 [2024-11-20 13:46:06.543680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:26:03.819 [2024-11-20 13:46:06.543702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:03.819 [2024-11-20 13:46:06.546769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:03.819 pt1 00:26:03.819 [2024-11-20 13:46:06.547006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:03.819 [2024-11-20 13:46:06.547132] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:03.819 [2024-11-20 13:46:06.547199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:03.819 [2024-11-20 13:46:06.547417] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:03.819 [2024-11-20 13:46:06.547442] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:03.819 [2024-11-20 13:46:06.547463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:26:03.819 [2024-11-20 13:46:06.547538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:03.819 [2024-11-20 13:46:06.547775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:03.819 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:03.820 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:03.820 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:03.820 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:03.820 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:03.820 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:03.820 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:03.820 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.820 13:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.820 13:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:03.820 13:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.820 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:03.820 "name": "raid_bdev1", 00:26:03.820 "uuid": "6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5", 00:26:03.820 "strip_size_kb": 0, 00:26:03.820 "state": "configuring", 00:26:03.820 "raid_level": "raid1", 00:26:03.820 "superblock": true, 00:26:03.820 "num_base_bdevs": 4, 00:26:03.820 "num_base_bdevs_discovered": 2, 00:26:03.820 "num_base_bdevs_operational": 3, 00:26:03.820 "base_bdevs_list": [ 00:26:03.820 { 00:26:03.820 "name": null, 00:26:03.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:03.820 "is_configured": false, 00:26:03.820 "data_offset": 2048, 00:26:03.820 
"data_size": 63488 00:26:03.820 }, 00:26:03.820 { 00:26:03.820 "name": "pt2", 00:26:03.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:03.820 "is_configured": true, 00:26:03.820 "data_offset": 2048, 00:26:03.820 "data_size": 63488 00:26:03.820 }, 00:26:03.820 { 00:26:03.820 "name": "pt3", 00:26:03.820 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:03.820 "is_configured": true, 00:26:03.820 "data_offset": 2048, 00:26:03.820 "data_size": 63488 00:26:03.820 }, 00:26:03.820 { 00:26:03.820 "name": null, 00:26:03.820 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:03.820 "is_configured": false, 00:26:03.820 "data_offset": 2048, 00:26:03.820 "data_size": 63488 00:26:03.820 } 00:26:03.820 ] 00:26:03.820 }' 00:26:03.820 13:46:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:03.820 13:46:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.387 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:26:04.387 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:04.387 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.387 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.387 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.387 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:26:04.387 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:04.387 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.387 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.387 [2024-11-20 
13:46:07.135729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:04.387 [2024-11-20 13:46:07.135964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:04.387 [2024-11-20 13:46:07.136045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:26:04.387 [2024-11-20 13:46:07.136308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:04.387 [2024-11-20 13:46:07.136933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:04.387 [2024-11-20 13:46:07.136963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:04.387 [2024-11-20 13:46:07.137073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:04.387 [2024-11-20 13:46:07.137106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:04.387 [2024-11-20 13:46:07.137283] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:26:04.387 [2024-11-20 13:46:07.137305] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:04.387 [2024-11-20 13:46:07.137637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:26:04.387 [2024-11-20 13:46:07.137823] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:26:04.387 [2024-11-20 13:46:07.137850] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:26:04.387 [2024-11-20 13:46:07.138045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:04.387 pt4 00:26:04.387 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.388 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:04.388 13:46:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:04.388 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:04.388 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:04.388 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:04.388 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:04.388 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:04.388 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:04.388 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:04.388 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:04.388 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:04.388 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.388 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.388 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.388 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.388 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:04.388 "name": "raid_bdev1", 00:26:04.388 "uuid": "6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5", 00:26:04.388 "strip_size_kb": 0, 00:26:04.388 "state": "online", 00:26:04.388 "raid_level": "raid1", 00:26:04.388 "superblock": true, 00:26:04.388 "num_base_bdevs": 4, 00:26:04.388 "num_base_bdevs_discovered": 3, 00:26:04.388 "num_base_bdevs_operational": 3, 00:26:04.388 "base_bdevs_list": [ 00:26:04.388 { 
00:26:04.388 "name": null, 00:26:04.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:04.388 "is_configured": false, 00:26:04.388 "data_offset": 2048, 00:26:04.388 "data_size": 63488 00:26:04.388 }, 00:26:04.388 { 00:26:04.388 "name": "pt2", 00:26:04.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:04.388 "is_configured": true, 00:26:04.388 "data_offset": 2048, 00:26:04.388 "data_size": 63488 00:26:04.388 }, 00:26:04.388 { 00:26:04.388 "name": "pt3", 00:26:04.388 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:04.388 "is_configured": true, 00:26:04.388 "data_offset": 2048, 00:26:04.388 "data_size": 63488 00:26:04.388 }, 00:26:04.388 { 00:26:04.388 "name": "pt4", 00:26:04.388 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:04.388 "is_configured": true, 00:26:04.388 "data_offset": 2048, 00:26:04.388 "data_size": 63488 00:26:04.388 } 00:26:04.388 ] 00:26:04.388 }' 00:26:04.388 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:04.388 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:04.955 
13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.955 [2024-11-20 13:46:07.724315] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5 '!=' 6bba6fdf-1bcf-42c9-bfb8-c642ec64a6b5 ']' 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74894 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74894 ']' 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74894 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74894 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:04.955 killing process with pid 74894 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74894' 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74894 00:26:04.955 [2024-11-20 13:46:07.830387] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:04.955 13:46:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74894 00:26:04.955 [2024-11-20 13:46:07.830522] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:04.955 [2024-11-20 13:46:07.830623] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:04.955 [2024-11-20 13:46:07.830643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:26:05.520 [2024-11-20 13:46:08.192222] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:06.455 13:46:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:26:06.455 00:26:06.455 real 0m9.747s 00:26:06.455 user 0m16.023s 00:26:06.455 sys 0m1.473s 00:26:06.455 ************************************ 00:26:06.455 END TEST raid_superblock_test 00:26:06.455 ************************************ 00:26:06.455 13:46:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:06.455 13:46:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.455 13:46:09 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:26:06.455 13:46:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:06.455 13:46:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:06.455 13:46:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:06.455 ************************************ 00:26:06.455 START TEST raid_read_error_test 00:26:06.455 ************************************ 00:26:06.455 13:46:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:26:06.455 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:26:06.455 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:26:06.455 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:26:06.455 13:46:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:26:06.455 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:06.455 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:26:06.455 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:06.455 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:06.455 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:26:06.455 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:26:06.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xk5pGHu4oK 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75398 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75398 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75398 ']' 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.456 13:46:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.714 [2024-11-20 13:46:09.435488] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:26:06.714 [2024-11-20 13:46:09.435981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75398 ] 00:26:06.714 [2024-11-20 13:46:09.622536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.973 [2024-11-20 13:46:09.757031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.231 [2024-11-20 13:46:09.955866] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:07.231 [2024-11-20 13:46:09.956258] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:07.797 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.798 BaseBdev1_malloc 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.798 true 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.798 [2024-11-20 13:46:10.501748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:07.798 [2024-11-20 13:46:10.502010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:07.798 [2024-11-20 13:46:10.502085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:07.798 [2024-11-20 13:46:10.502259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:07.798 [2024-11-20 13:46:10.505157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:07.798 [2024-11-20 13:46:10.505206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:07.798 BaseBdev1 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.798 BaseBdev2_malloc 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.798 true 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.798 [2024-11-20 13:46:10.563542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:07.798 [2024-11-20 13:46:10.563837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:07.798 [2024-11-20 13:46:10.563881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:07.798 [2024-11-20 13:46:10.563898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:07.798 [2024-11-20 13:46:10.567003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:07.798 [2024-11-20 13:46:10.567077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:07.798 BaseBdev2 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.798 BaseBdev3_malloc 00:26:07.798 13:46:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.798 true 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.798 [2024-11-20 13:46:10.640805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:07.798 [2024-11-20 13:46:10.640888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:07.798 [2024-11-20 13:46:10.640941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:07.798 [2024-11-20 13:46:10.640976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:07.798 [2024-11-20 13:46:10.643988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:07.798 [2024-11-20 13:46:10.644033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:07.798 BaseBdev3 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.798 BaseBdev4_malloc 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.798 true 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.798 [2024-11-20 13:46:10.703619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:07.798 [2024-11-20 13:46:10.703819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:07.798 [2024-11-20 13:46:10.703915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:07.798 [2024-11-20 13:46:10.703942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:07.798 [2024-11-20 13:46:10.706776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:07.798 [2024-11-20 13:46:10.706828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:07.798 BaseBdev4 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.798 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:07.798 [2024-11-20 13:46:10.711768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:08.056 [2024-11-20 13:46:10.714437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:08.056 [2024-11-20 13:46:10.714695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:08.056 [2024-11-20 13:46:10.714841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:08.056 [2024-11-20 13:46:10.715214] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:26:08.056 [2024-11-20 13:46:10.715371] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:08.056 [2024-11-20 13:46:10.715726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:26:08.056 [2024-11-20 13:46:10.716089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:26:08.056 [2024-11-20 13:46:10.716213] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:26:08.056 [2024-11-20 13:46:10.716624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:08.056 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.056 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:08.056 13:46:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:08.056 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:08.056 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:08.056 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:08.056 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:08.056 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:08.056 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:08.057 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:08.057 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:08.057 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:08.057 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.057 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.057 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.057 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.057 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:08.057 "name": "raid_bdev1", 00:26:08.057 "uuid": "9544ce20-4499-44be-a46b-879aa0ed0bdc", 00:26:08.057 "strip_size_kb": 0, 00:26:08.057 "state": "online", 00:26:08.057 "raid_level": "raid1", 00:26:08.057 "superblock": true, 00:26:08.057 "num_base_bdevs": 4, 00:26:08.057 "num_base_bdevs_discovered": 4, 00:26:08.057 "num_base_bdevs_operational": 4, 00:26:08.057 "base_bdevs_list": [ 00:26:08.057 { 
00:26:08.057 "name": "BaseBdev1", 00:26:08.057 "uuid": "3d9214f9-1f09-5ea7-80c2-d208f688aef5", 00:26:08.057 "is_configured": true, 00:26:08.057 "data_offset": 2048, 00:26:08.057 "data_size": 63488 00:26:08.057 }, 00:26:08.057 { 00:26:08.057 "name": "BaseBdev2", 00:26:08.057 "uuid": "d9f23ae6-76c4-59e5-9813-2054e37eadd4", 00:26:08.057 "is_configured": true, 00:26:08.057 "data_offset": 2048, 00:26:08.057 "data_size": 63488 00:26:08.057 }, 00:26:08.057 { 00:26:08.057 "name": "BaseBdev3", 00:26:08.057 "uuid": "85a2759a-2157-5ab9-9d22-b868e985ab7f", 00:26:08.057 "is_configured": true, 00:26:08.057 "data_offset": 2048, 00:26:08.057 "data_size": 63488 00:26:08.057 }, 00:26:08.057 { 00:26:08.057 "name": "BaseBdev4", 00:26:08.057 "uuid": "4d4e665c-0213-5a1e-bf50-38a97e91eea1", 00:26:08.057 "is_configured": true, 00:26:08.057 "data_offset": 2048, 00:26:08.057 "data_size": 63488 00:26:08.057 } 00:26:08.057 ] 00:26:08.057 }' 00:26:08.057 13:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:08.057 13:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.623 13:46:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:26:08.623 13:46:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:08.623 [2024-11-20 13:46:11.370169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.560 13:46:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.560 13:46:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.560 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:09.560 "name": "raid_bdev1", 00:26:09.560 "uuid": "9544ce20-4499-44be-a46b-879aa0ed0bdc", 00:26:09.560 "strip_size_kb": 0, 00:26:09.560 "state": "online", 00:26:09.560 "raid_level": "raid1", 00:26:09.560 "superblock": true, 00:26:09.560 "num_base_bdevs": 4, 00:26:09.560 "num_base_bdevs_discovered": 4, 00:26:09.560 "num_base_bdevs_operational": 4, 00:26:09.560 "base_bdevs_list": [ 00:26:09.560 { 00:26:09.560 "name": "BaseBdev1", 00:26:09.560 "uuid": "3d9214f9-1f09-5ea7-80c2-d208f688aef5", 00:26:09.560 "is_configured": true, 00:26:09.560 "data_offset": 2048, 00:26:09.560 "data_size": 63488 00:26:09.560 }, 00:26:09.560 { 00:26:09.560 "name": "BaseBdev2", 00:26:09.560 "uuid": "d9f23ae6-76c4-59e5-9813-2054e37eadd4", 00:26:09.560 "is_configured": true, 00:26:09.560 "data_offset": 2048, 00:26:09.560 "data_size": 63488 00:26:09.560 }, 00:26:09.560 { 00:26:09.560 "name": "BaseBdev3", 00:26:09.560 "uuid": "85a2759a-2157-5ab9-9d22-b868e985ab7f", 00:26:09.560 "is_configured": true, 00:26:09.560 "data_offset": 2048, 00:26:09.560 "data_size": 63488 00:26:09.560 }, 00:26:09.560 { 00:26:09.560 "name": "BaseBdev4", 00:26:09.560 "uuid": "4d4e665c-0213-5a1e-bf50-38a97e91eea1", 00:26:09.561 "is_configured": true, 00:26:09.561 "data_offset": 2048, 00:26:09.561 "data_size": 63488 00:26:09.561 } 00:26:09.561 ] 00:26:09.561 }' 00:26:09.561 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:09.561 13:46:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.129 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:10.129 13:46:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.129 13:46:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:10.129 [2024-11-20 13:46:12.798793] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:10.129 [2024-11-20 13:46:12.799012] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:10.129 [2024-11-20 13:46:12.802895] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:10.129 { 00:26:10.129 "results": [ 00:26:10.129 { 00:26:10.129 "job": "raid_bdev1", 00:26:10.129 "core_mask": "0x1", 00:26:10.129 "workload": "randrw", 00:26:10.129 "percentage": 50, 00:26:10.129 "status": "finished", 00:26:10.129 "queue_depth": 1, 00:26:10.129 "io_size": 131072, 00:26:10.129 "runtime": 1.42646, 00:26:10.129 "iops": 7259.2291406699105, 00:26:10.129 "mibps": 907.4036425837388, 00:26:10.129 "io_failed": 0, 00:26:10.129 "io_timeout": 0, 00:26:10.129 "avg_latency_us": 133.30001246652913, 00:26:10.129 "min_latency_us": 38.86545454545455, 00:26:10.129 "max_latency_us": 1861.8181818181818 00:26:10.129 } 00:26:10.129 ], 00:26:10.129 "core_count": 1 00:26:10.129 } 00:26:10.129 [2024-11-20 13:46:12.803166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:10.129 [2024-11-20 13:46:12.803351] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:10.129 [2024-11-20 13:46:12.803375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:26:10.129 13:46:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.129 13:46:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75398 00:26:10.129 13:46:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75398 ']' 00:26:10.129 13:46:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75398 00:26:10.129 13:46:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:26:10.129 13:46:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:10.129 13:46:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75398 00:26:10.129 13:46:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:10.129 13:46:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:10.129 13:46:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75398' 00:26:10.129 killing process with pid 75398 00:26:10.129 13:46:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75398 00:26:10.129 [2024-11-20 13:46:12.844994] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:10.129 13:46:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75398 00:26:10.388 [2024-11-20 13:46:13.141914] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:11.764 13:46:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xk5pGHu4oK 00:26:11.765 13:46:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:26:11.765 13:46:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:26:11.765 13:46:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:26:11.765 13:46:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:26:11.765 13:46:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:11.765 13:46:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:11.765 13:46:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:26:11.765 ************************************ 00:26:11.765 END TEST raid_read_error_test 
00:26:11.765 ************************************ 00:26:11.765 00:26:11.765 real 0m5.036s 00:26:11.765 user 0m6.170s 00:26:11.765 sys 0m0.639s 00:26:11.765 13:46:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:11.765 13:46:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.765 13:46:14 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:26:11.765 13:46:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:11.765 13:46:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:11.765 13:46:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:11.765 ************************************ 00:26:11.765 START TEST raid_write_error_test 00:26:11.765 ************************************ 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TSIpDp9hiI 00:26:11.765 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75544 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75544 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75544 ']' 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:11.765 13:46:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.765 [2024-11-20 13:46:14.529398] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:26:11.765 [2024-11-20 13:46:14.529600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75544 ] 00:26:12.023 [2024-11-20 13:46:14.722313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.023 [2024-11-20 13:46:14.881608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.282 [2024-11-20 13:46:15.095836] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:12.282 [2024-11-20 13:46:15.095945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.851 BaseBdev1_malloc 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.851 true 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.851 [2024-11-20 13:46:15.634569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:12.851 [2024-11-20 13:46:15.634642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:12.851 [2024-11-20 13:46:15.634672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:12.851 [2024-11-20 13:46:15.634690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:12.851 [2024-11-20 13:46:15.637685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:12.851 [2024-11-20 13:46:15.637750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:12.851 BaseBdev1 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.851 BaseBdev2_malloc 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:26:12.851 13:46:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.851 true 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.851 [2024-11-20 13:46:15.698323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:12.851 [2024-11-20 13:46:15.698406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:12.851 [2024-11-20 13:46:15.698432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:12.851 [2024-11-20 13:46:15.698457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:12.851 [2024-11-20 13:46:15.701414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:12.851 [2024-11-20 13:46:15.701486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:12.851 BaseBdev2 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:26:12.851 BaseBdev3_malloc 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.851 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.110 true 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.110 [2024-11-20 13:46:15.769891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:13.110 [2024-11-20 13:46:15.770124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:13.110 [2024-11-20 13:46:15.770175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:13.110 [2024-11-20 13:46:15.770217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:13.110 [2024-11-20 13:46:15.773338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:13.110 [2024-11-20 13:46:15.773384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:13.110 BaseBdev3 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.110 BaseBdev4_malloc 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.110 true 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.110 [2024-11-20 13:46:15.829822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:13.110 [2024-11-20 13:46:15.829922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:13.110 [2024-11-20 13:46:15.829952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:13.110 [2024-11-20 13:46:15.829979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:13.110 [2024-11-20 13:46:15.833806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:13.110 [2024-11-20 13:46:15.833865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:13.110 BaseBdev4 
00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.110 [2024-11-20 13:46:15.838165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:13.110 [2024-11-20 13:46:15.840819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:13.110 [2024-11-20 13:46:15.841082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:13.110 [2024-11-20 13:46:15.841330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:13.110 [2024-11-20 13:46:15.841789] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:26:13.110 [2024-11-20 13:46:15.841824] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:13.110 [2024-11-20 13:46:15.842166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:26:13.110 [2024-11-20 13:46:15.842395] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:26:13.110 [2024-11-20 13:46:15.842410] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:26:13.110 [2024-11-20 13:46:15.842692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.110 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:13.110 "name": "raid_bdev1", 00:26:13.110 "uuid": "14641aa3-fe32-4038-ba06-45b044325e85", 00:26:13.110 "strip_size_kb": 0, 00:26:13.110 "state": "online", 00:26:13.110 "raid_level": "raid1", 00:26:13.110 "superblock": true, 00:26:13.110 "num_base_bdevs": 4, 00:26:13.110 "num_base_bdevs_discovered": 4, 00:26:13.110 
"num_base_bdevs_operational": 4, 00:26:13.110 "base_bdevs_list": [ 00:26:13.110 { 00:26:13.110 "name": "BaseBdev1", 00:26:13.110 "uuid": "342077ae-0774-5542-9b36-48f3b01da20f", 00:26:13.110 "is_configured": true, 00:26:13.110 "data_offset": 2048, 00:26:13.110 "data_size": 63488 00:26:13.110 }, 00:26:13.110 { 00:26:13.110 "name": "BaseBdev2", 00:26:13.110 "uuid": "1729ae75-b273-5373-90dc-9cb4906c7e06", 00:26:13.110 "is_configured": true, 00:26:13.110 "data_offset": 2048, 00:26:13.110 "data_size": 63488 00:26:13.110 }, 00:26:13.110 { 00:26:13.110 "name": "BaseBdev3", 00:26:13.110 "uuid": "bed733fd-e142-5099-a0ad-36b70e9a0ad8", 00:26:13.110 "is_configured": true, 00:26:13.110 "data_offset": 2048, 00:26:13.110 "data_size": 63488 00:26:13.110 }, 00:26:13.110 { 00:26:13.110 "name": "BaseBdev4", 00:26:13.111 "uuid": "0441ced4-61a1-5525-a67f-3512eb06eda6", 00:26:13.111 "is_configured": true, 00:26:13.111 "data_offset": 2048, 00:26:13.111 "data_size": 63488 00:26:13.111 } 00:26:13.111 ] 00:26:13.111 }' 00:26:13.111 13:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:13.111 13:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.678 13:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:26:13.678 13:46:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:13.678 [2024-11-20 13:46:16.484414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.614 [2024-11-20 13:46:17.366466] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:26:14.614 [2024-11-20 13:46:17.366700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:14.614 [2024-11-20 13:46:17.367014] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:14.614 "name": "raid_bdev1", 00:26:14.614 "uuid": "14641aa3-fe32-4038-ba06-45b044325e85", 00:26:14.614 "strip_size_kb": 0, 00:26:14.614 "state": "online", 00:26:14.614 "raid_level": "raid1", 00:26:14.614 "superblock": true, 00:26:14.614 "num_base_bdevs": 4, 00:26:14.614 "num_base_bdevs_discovered": 3, 00:26:14.614 "num_base_bdevs_operational": 3, 00:26:14.614 "base_bdevs_list": [ 00:26:14.614 { 00:26:14.614 "name": null, 00:26:14.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.614 "is_configured": false, 00:26:14.614 "data_offset": 0, 00:26:14.614 "data_size": 63488 00:26:14.614 }, 00:26:14.614 { 00:26:14.614 "name": "BaseBdev2", 00:26:14.614 "uuid": "1729ae75-b273-5373-90dc-9cb4906c7e06", 00:26:14.614 "is_configured": true, 00:26:14.614 "data_offset": 2048, 00:26:14.614 "data_size": 63488 00:26:14.614 }, 00:26:14.614 { 00:26:14.614 "name": "BaseBdev3", 00:26:14.614 "uuid": "bed733fd-e142-5099-a0ad-36b70e9a0ad8", 00:26:14.614 "is_configured": true, 00:26:14.614 "data_offset": 2048, 00:26:14.614 "data_size": 63488 00:26:14.614 }, 00:26:14.614 { 00:26:14.614 "name": "BaseBdev4", 00:26:14.614 "uuid": "0441ced4-61a1-5525-a67f-3512eb06eda6", 00:26:14.614 "is_configured": true, 00:26:14.614 "data_offset": 2048, 00:26:14.614 "data_size": 63488 00:26:14.614 } 00:26:14.614 ] 
00:26:14.614 }' 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:14.614 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.182 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:15.182 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.182 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.182 [2024-11-20 13:46:17.912657] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:15.182 [2024-11-20 13:46:17.912824] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:15.182 { 00:26:15.182 "results": [ 00:26:15.182 { 00:26:15.182 "job": "raid_bdev1", 00:26:15.182 "core_mask": "0x1", 00:26:15.182 "workload": "randrw", 00:26:15.182 "percentage": 50, 00:26:15.182 "status": "finished", 00:26:15.182 "queue_depth": 1, 00:26:15.182 "io_size": 131072, 00:26:15.182 "runtime": 1.425848, 00:26:15.182 "iops": 7977.708703873063, 00:26:15.182 "mibps": 997.2135879841329, 00:26:15.182 "io_failed": 0, 00:26:15.182 "io_timeout": 0, 00:26:15.182 "avg_latency_us": 120.80246665334666, 00:26:15.182 "min_latency_us": 39.09818181818182, 00:26:15.182 "max_latency_us": 2040.5527272727272 00:26:15.182 } 00:26:15.182 ], 00:26:15.182 "core_count": 1 00:26:15.182 } 00:26:15.182 [2024-11-20 13:46:17.916446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:15.182 [2024-11-20 13:46:17.916504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:15.182 [2024-11-20 13:46:17.916698] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:15.182 [2024-11-20 13:46:17.916721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, 
state offline 00:26:15.182 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.182 13:46:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75544 00:26:15.182 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75544 ']' 00:26:15.182 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75544 00:26:15.182 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:26:15.182 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:15.182 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75544 00:26:15.182 killing process with pid 75544 00:26:15.182 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:15.182 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:15.182 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75544' 00:26:15.182 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75544 00:26:15.182 [2024-11-20 13:46:17.955984] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:15.182 13:46:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75544 00:26:15.440 [2024-11-20 13:46:18.257142] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:16.833 13:46:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TSIpDp9hiI 00:26:16.833 13:46:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:26:16.833 13:46:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:26:16.833 ************************************ 00:26:16.833 END TEST 
raid_write_error_test 00:26:16.833 ************************************ 00:26:16.833 13:46:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:26:16.833 13:46:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:26:16.833 13:46:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:16.833 13:46:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:26:16.833 13:46:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:26:16.833 00:26:16.833 real 0m4.999s 00:26:16.833 user 0m6.201s 00:26:16.833 sys 0m0.616s 00:26:16.833 13:46:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:16.833 13:46:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.833 13:46:19 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:26:16.833 13:46:19 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:26:16.833 13:46:19 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:26:16.833 13:46:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:26:16.833 13:46:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:16.833 13:46:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:16.833 ************************************ 00:26:16.833 START TEST raid_rebuild_test 00:26:16.833 ************************************ 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 
00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true 
']' 00:26:16.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75693 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75693 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75693 ']' 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:16.833 13:46:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.833 [2024-11-20 13:46:19.594715] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:26:16.833 [2024-11-20 13:46:19.595185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75693 ] 00:26:16.833 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:16.833 Zero copy mechanism will not be used. 
00:26:17.115 [2024-11-20 13:46:19.780772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.115 [2024-11-20 13:46:19.907396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.372 [2024-11-20 13:46:20.116425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:17.372 [2024-11-20 13:46:20.116475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.940 BaseBdev1_malloc 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.940 [2024-11-20 13:46:20.633222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:17.940 [2024-11-20 13:46:20.633444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:17.940 [2024-11-20 13:46:20.633523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:17.940 [2024-11-20 13:46:20.633659] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:17.940 [2024-11-20 13:46:20.636544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:17.940 [2024-11-20 13:46:20.636734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:17.940 BaseBdev1 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.940 BaseBdev2_malloc 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.940 [2024-11-20 13:46:20.690017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:17.940 [2024-11-20 13:46:20.690098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:17.940 [2024-11-20 13:46:20.690133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:17.940 [2024-11-20 13:46:20.690164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:17.940 [2024-11-20 13:46:20.692921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:17.940 [2024-11-20 13:46:20.693150] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:17.940 BaseBdev2 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.940 spare_malloc 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.940 spare_delay 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.940 [2024-11-20 13:46:20.756698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:17.940 [2024-11-20 13:46:20.756971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:17.940 [2024-11-20 13:46:20.757109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:26:17.940 [2024-11-20 13:46:20.757142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:17.940 [2024-11-20 
13:46:20.760059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:17.940 [2024-11-20 13:46:20.760126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:17.940 spare 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.940 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.940 [2024-11-20 13:46:20.764936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:17.940 [2024-11-20 13:46:20.767572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:17.940 [2024-11-20 13:46:20.767958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:17.940 [2024-11-20 13:46:20.767989] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:17.940 [2024-11-20 13:46:20.768361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:17.940 [2024-11-20 13:46:20.768552] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:17.940 [2024-11-20 13:46:20.768570] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:17.941 [2024-11-20 13:46:20.768733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:17.941 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.941 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:17.941 13:46:20 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:17.941 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:17.941 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:17.941 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:17.941 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:17.941 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:17.941 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:17.941 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:17.941 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:17.941 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:17.941 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.941 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:17.941 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.941 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.941 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:17.941 "name": "raid_bdev1", 00:26:17.941 "uuid": "1b8a1356-5e48-4272-9f19-aff6eca474dc", 00:26:17.941 "strip_size_kb": 0, 00:26:17.941 "state": "online", 00:26:17.941 "raid_level": "raid1", 00:26:17.941 "superblock": false, 00:26:17.941 "num_base_bdevs": 2, 00:26:17.941 "num_base_bdevs_discovered": 2, 00:26:17.941 "num_base_bdevs_operational": 2, 00:26:17.941 "base_bdevs_list": [ 00:26:17.941 { 00:26:17.941 "name": "BaseBdev1", 
00:26:17.941 "uuid": "204e9c8f-063a-5392-8779-9430f7705fcf", 00:26:17.941 "is_configured": true, 00:26:17.941 "data_offset": 0, 00:26:17.941 "data_size": 65536 00:26:17.941 }, 00:26:17.941 { 00:26:17.941 "name": "BaseBdev2", 00:26:17.941 "uuid": "22a68b45-3b1c-5c46-9e09-94a48f9b6cc9", 00:26:17.941 "is_configured": true, 00:26:17.941 "data_offset": 0, 00:26:17.941 "data_size": 65536 00:26:17.941 } 00:26:17.941 ] 00:26:17.941 }' 00:26:17.941 13:46:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:17.941 13:46:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:26:18.508 [2024-11-20 13:46:21.285484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:26:18.508 
13:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:18.508 13:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:18.766 [2024-11-20 13:46:21.633304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:18.766 /dev/nbd0 00:26:18.766 13:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:18.766 13:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:18.766 13:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:18.766 13:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:26:18.766 13:46:21 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:18.766 13:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:18.766 13:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:18.766 13:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:26:18.766 13:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:18.766 13:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:18.766 13:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:18.766 1+0 records in 00:26:18.766 1+0 records out 00:26:18.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266052 s, 15.4 MB/s 00:26:19.025 13:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:19.025 13:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:26:19.025 13:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:19.025 13:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:19.025 13:46:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:26:19.025 13:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:19.025 13:46:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:19.025 13:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:26:19.025 13:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:26:19.025 13:46:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:26:25.590 65536+0 records in 00:26:25.590 65536+0 records out 00:26:25.590 33554432 bytes (34 MB, 32 MiB) copied, 6.63914 s, 5.1 MB/s 00:26:25.590 13:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:26:25.590 13:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:25.590 13:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:25.590 13:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:25.590 13:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:25.590 13:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:25.590 13:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:25.850 [2024-11-20 13:46:28.628621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.850 [2024-11-20 13:46:28.662081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.850 13:46:28 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:25.850 "name": "raid_bdev1", 00:26:25.850 "uuid": "1b8a1356-5e48-4272-9f19-aff6eca474dc", 00:26:25.850 "strip_size_kb": 0, 00:26:25.850 "state": "online", 00:26:25.850 "raid_level": "raid1", 00:26:25.850 "superblock": false, 00:26:25.850 "num_base_bdevs": 2, 00:26:25.850 "num_base_bdevs_discovered": 1, 00:26:25.850 "num_base_bdevs_operational": 1, 00:26:25.850 "base_bdevs_list": [ 00:26:25.850 { 00:26:25.850 "name": null, 00:26:25.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.850 "is_configured": false, 00:26:25.850 "data_offset": 0, 00:26:25.850 "data_size": 65536 00:26:25.850 }, 00:26:25.850 { 00:26:25.850 "name": "BaseBdev2", 00:26:25.850 "uuid": "22a68b45-3b1c-5c46-9e09-94a48f9b6cc9", 00:26:25.850 "is_configured": true, 00:26:25.850 "data_offset": 0, 00:26:25.850 "data_size": 65536 00:26:25.850 } 00:26:25.850 ] 00:26:25.850 }' 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:25.850 13:46:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.419 13:46:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:26.419 13:46:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.419 13:46:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.419 [2024-11-20 13:46:29.182250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:26.419 [2024-11-20 13:46:29.198708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:26:26.419 13:46:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.419 13:46:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:26:26.419 [2024-11-20 13:46:29.201509] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:26:27.354 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:27.354 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:27.354 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:27.354 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:27.354 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:27.354 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.354 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.354 13:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.355 13:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.355 13:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.355 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:27.355 "name": "raid_bdev1", 00:26:27.355 "uuid": "1b8a1356-5e48-4272-9f19-aff6eca474dc", 00:26:27.355 "strip_size_kb": 0, 00:26:27.355 "state": "online", 00:26:27.355 "raid_level": "raid1", 00:26:27.355 "superblock": false, 00:26:27.355 "num_base_bdevs": 2, 00:26:27.355 "num_base_bdevs_discovered": 2, 00:26:27.355 "num_base_bdevs_operational": 2, 00:26:27.355 "process": { 00:26:27.355 "type": "rebuild", 00:26:27.355 "target": "spare", 00:26:27.355 "progress": { 00:26:27.355 "blocks": 20480, 00:26:27.355 "percent": 31 00:26:27.355 } 00:26:27.355 }, 00:26:27.355 "base_bdevs_list": [ 00:26:27.355 { 00:26:27.355 "name": "spare", 00:26:27.355 "uuid": "7ebaebaf-62fc-58f2-91ce-0d9a6e30d6d9", 00:26:27.355 "is_configured": true, 00:26:27.355 "data_offset": 0, 00:26:27.355 
"data_size": 65536 00:26:27.355 }, 00:26:27.355 { 00:26:27.355 "name": "BaseBdev2", 00:26:27.355 "uuid": "22a68b45-3b1c-5c46-9e09-94a48f9b6cc9", 00:26:27.355 "is_configured": true, 00:26:27.355 "data_offset": 0, 00:26:27.355 "data_size": 65536 00:26:27.355 } 00:26:27.355 ] 00:26:27.355 }' 00:26:27.355 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.613 [2024-11-20 13:46:30.358821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:27.613 [2024-11-20 13:46:30.410874] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:27.613 [2024-11-20 13:46:30.410989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:27.613 [2024-11-20 13:46:30.411016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:27.613 [2024-11-20 13:46:30.411033] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:27.613 "name": "raid_bdev1", 00:26:27.613 "uuid": "1b8a1356-5e48-4272-9f19-aff6eca474dc", 00:26:27.613 "strip_size_kb": 0, 00:26:27.613 "state": "online", 00:26:27.613 "raid_level": "raid1", 00:26:27.613 "superblock": false, 00:26:27.613 "num_base_bdevs": 2, 00:26:27.613 "num_base_bdevs_discovered": 1, 00:26:27.613 "num_base_bdevs_operational": 1, 00:26:27.613 "base_bdevs_list": [ 00:26:27.613 { 00:26:27.613 "name": null, 00:26:27.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:27.613 
"is_configured": false, 00:26:27.613 "data_offset": 0, 00:26:27.613 "data_size": 65536 00:26:27.613 }, 00:26:27.613 { 00:26:27.613 "name": "BaseBdev2", 00:26:27.613 "uuid": "22a68b45-3b1c-5c46-9e09-94a48f9b6cc9", 00:26:27.613 "is_configured": true, 00:26:27.613 "data_offset": 0, 00:26:27.613 "data_size": 65536 00:26:27.613 } 00:26:27.613 ] 00:26:27.613 }' 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:27.613 13:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.209 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:28.209 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:28.209 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:28.210 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:28.210 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:28.210 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:28.210 13:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.210 13:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.210 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.210 13:46:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.210 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:28.210 "name": "raid_bdev1", 00:26:28.210 "uuid": "1b8a1356-5e48-4272-9f19-aff6eca474dc", 00:26:28.210 "strip_size_kb": 0, 00:26:28.210 "state": "online", 00:26:28.210 "raid_level": "raid1", 00:26:28.210 "superblock": false, 00:26:28.210 "num_base_bdevs": 2, 00:26:28.210 
"num_base_bdevs_discovered": 1, 00:26:28.210 "num_base_bdevs_operational": 1, 00:26:28.210 "base_bdevs_list": [ 00:26:28.210 { 00:26:28.210 "name": null, 00:26:28.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.210 "is_configured": false, 00:26:28.210 "data_offset": 0, 00:26:28.210 "data_size": 65536 00:26:28.210 }, 00:26:28.210 { 00:26:28.210 "name": "BaseBdev2", 00:26:28.210 "uuid": "22a68b45-3b1c-5c46-9e09-94a48f9b6cc9", 00:26:28.210 "is_configured": true, 00:26:28.210 "data_offset": 0, 00:26:28.210 "data_size": 65536 00:26:28.210 } 00:26:28.210 ] 00:26:28.210 }' 00:26:28.210 13:46:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:28.210 13:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:28.210 13:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:28.210 13:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:28.210 13:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:28.210 13:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.210 13:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.210 [2024-11-20 13:46:31.091590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:28.210 [2024-11-20 13:46:31.107240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:26:28.210 13:46:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.210 13:46:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:26:28.210 [2024-11-20 13:46:31.109773] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:29.586 "name": "raid_bdev1", 00:26:29.586 "uuid": "1b8a1356-5e48-4272-9f19-aff6eca474dc", 00:26:29.586 "strip_size_kb": 0, 00:26:29.586 "state": "online", 00:26:29.586 "raid_level": "raid1", 00:26:29.586 "superblock": false, 00:26:29.586 "num_base_bdevs": 2, 00:26:29.586 "num_base_bdevs_discovered": 2, 00:26:29.586 "num_base_bdevs_operational": 2, 00:26:29.586 "process": { 00:26:29.586 "type": "rebuild", 00:26:29.586 "target": "spare", 00:26:29.586 "progress": { 00:26:29.586 "blocks": 20480, 00:26:29.586 "percent": 31 00:26:29.586 } 00:26:29.586 }, 00:26:29.586 "base_bdevs_list": [ 00:26:29.586 { 00:26:29.586 "name": "spare", 00:26:29.586 "uuid": "7ebaebaf-62fc-58f2-91ce-0d9a6e30d6d9", 00:26:29.586 "is_configured": true, 00:26:29.586 "data_offset": 0, 00:26:29.586 "data_size": 65536 00:26:29.586 }, 00:26:29.586 { 00:26:29.586 "name": "BaseBdev2", 00:26:29.586 "uuid": 
"22a68b45-3b1c-5c46-9e09-94a48f9b6cc9", 00:26:29.586 "is_configured": true, 00:26:29.586 "data_offset": 0, 00:26:29.586 "data_size": 65536 00:26:29.586 } 00:26:29.586 ] 00:26:29.586 }' 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=403 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:29.586 "name": "raid_bdev1", 00:26:29.586 "uuid": "1b8a1356-5e48-4272-9f19-aff6eca474dc", 00:26:29.586 "strip_size_kb": 0, 00:26:29.586 "state": "online", 00:26:29.586 "raid_level": "raid1", 00:26:29.586 "superblock": false, 00:26:29.586 "num_base_bdevs": 2, 00:26:29.586 "num_base_bdevs_discovered": 2, 00:26:29.586 "num_base_bdevs_operational": 2, 00:26:29.586 "process": { 00:26:29.586 "type": "rebuild", 00:26:29.586 "target": "spare", 00:26:29.586 "progress": { 00:26:29.586 "blocks": 22528, 00:26:29.586 "percent": 34 00:26:29.586 } 00:26:29.586 }, 00:26:29.586 "base_bdevs_list": [ 00:26:29.586 { 00:26:29.586 "name": "spare", 00:26:29.586 "uuid": "7ebaebaf-62fc-58f2-91ce-0d9a6e30d6d9", 00:26:29.586 "is_configured": true, 00:26:29.586 "data_offset": 0, 00:26:29.586 "data_size": 65536 00:26:29.586 }, 00:26:29.586 { 00:26:29.586 "name": "BaseBdev2", 00:26:29.586 "uuid": "22a68b45-3b1c-5c46-9e09-94a48f9b6cc9", 00:26:29.586 "is_configured": true, 00:26:29.586 "data_offset": 0, 00:26:29.586 "data_size": 65536 00:26:29.586 } 00:26:29.586 ] 00:26:29.586 }' 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:29.586 13:46:32 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:26:30.961 13:46:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:30.961 13:46:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:30.961 13:46:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:30.961 13:46:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:30.961 13:46:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:30.961 13:46:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:30.961 13:46:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:30.961 13:46:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.962 13:46:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.962 13:46:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.962 13:46:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.962 13:46:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:30.962 "name": "raid_bdev1", 00:26:30.962 "uuid": "1b8a1356-5e48-4272-9f19-aff6eca474dc", 00:26:30.962 "strip_size_kb": 0, 00:26:30.962 "state": "online", 00:26:30.962 "raid_level": "raid1", 00:26:30.962 "superblock": false, 00:26:30.962 "num_base_bdevs": 2, 00:26:30.962 "num_base_bdevs_discovered": 2, 00:26:30.962 "num_base_bdevs_operational": 2, 00:26:30.962 "process": { 00:26:30.962 "type": "rebuild", 00:26:30.962 "target": "spare", 00:26:30.962 "progress": { 00:26:30.962 "blocks": 47104, 00:26:30.962 "percent": 71 00:26:30.962 } 00:26:30.962 }, 00:26:30.962 "base_bdevs_list": [ 00:26:30.962 { 00:26:30.962 "name": "spare", 00:26:30.962 "uuid": 
"7ebaebaf-62fc-58f2-91ce-0d9a6e30d6d9", 00:26:30.962 "is_configured": true, 00:26:30.962 "data_offset": 0, 00:26:30.962 "data_size": 65536 00:26:30.962 }, 00:26:30.962 { 00:26:30.962 "name": "BaseBdev2", 00:26:30.962 "uuid": "22a68b45-3b1c-5c46-9e09-94a48f9b6cc9", 00:26:30.962 "is_configured": true, 00:26:30.962 "data_offset": 0, 00:26:30.962 "data_size": 65536 00:26:30.962 } 00:26:30.962 ] 00:26:30.962 }' 00:26:30.962 13:46:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:30.962 13:46:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:30.962 13:46:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:30.962 13:46:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:30.962 13:46:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:31.528 [2024-11-20 13:46:34.333886] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:31.528 [2024-11-20 13:46:34.334017] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:31.528 [2024-11-20 13:46:34.334092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:31.786 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:31.786 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:31.786 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:31.786 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:31.786 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:31.786 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:31.786 13:46:34 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:31.786 13:46:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.786 13:46:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:31.786 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.786 13:46:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.786 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:31.786 "name": "raid_bdev1", 00:26:31.786 "uuid": "1b8a1356-5e48-4272-9f19-aff6eca474dc", 00:26:31.786 "strip_size_kb": 0, 00:26:31.786 "state": "online", 00:26:31.786 "raid_level": "raid1", 00:26:31.786 "superblock": false, 00:26:31.786 "num_base_bdevs": 2, 00:26:31.786 "num_base_bdevs_discovered": 2, 00:26:31.786 "num_base_bdevs_operational": 2, 00:26:31.786 "base_bdevs_list": [ 00:26:31.786 { 00:26:31.786 "name": "spare", 00:26:31.786 "uuid": "7ebaebaf-62fc-58f2-91ce-0d9a6e30d6d9", 00:26:31.786 "is_configured": true, 00:26:31.786 "data_offset": 0, 00:26:31.786 "data_size": 65536 00:26:31.786 }, 00:26:31.786 { 00:26:31.786 "name": "BaseBdev2", 00:26:31.786 "uuid": "22a68b45-3b1c-5c46-9e09-94a48f9b6cc9", 00:26:31.786 "is_configured": true, 00:26:31.786 "data_offset": 0, 00:26:31.786 "data_size": 65536 00:26:31.786 } 00:26:31.786 ] 00:26:31.786 }' 00:26:31.786 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:31.786 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:31.786 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:32.045 "name": "raid_bdev1", 00:26:32.045 "uuid": "1b8a1356-5e48-4272-9f19-aff6eca474dc", 00:26:32.045 "strip_size_kb": 0, 00:26:32.045 "state": "online", 00:26:32.045 "raid_level": "raid1", 00:26:32.045 "superblock": false, 00:26:32.045 "num_base_bdevs": 2, 00:26:32.045 "num_base_bdevs_discovered": 2, 00:26:32.045 "num_base_bdevs_operational": 2, 00:26:32.045 "base_bdevs_list": [ 00:26:32.045 { 00:26:32.045 "name": "spare", 00:26:32.045 "uuid": "7ebaebaf-62fc-58f2-91ce-0d9a6e30d6d9", 00:26:32.045 "is_configured": true, 00:26:32.045 "data_offset": 0, 00:26:32.045 "data_size": 65536 00:26:32.045 }, 00:26:32.045 { 00:26:32.045 "name": "BaseBdev2", 00:26:32.045 "uuid": "22a68b45-3b1c-5c46-9e09-94a48f9b6cc9", 00:26:32.045 "is_configured": true, 00:26:32.045 "data_offset": 0, 00:26:32.045 "data_size": 65536 
00:26:32.045 } 00:26:32.045 ] 00:26:32.045 }' 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:32.045 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:32.046 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:32.046 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:32.046 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:32.046 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:32.046 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:32.046 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:32.046 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:32.046 13:46:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.046 13:46:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.046 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.046 
13:46:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.304 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:32.304 "name": "raid_bdev1", 00:26:32.304 "uuid": "1b8a1356-5e48-4272-9f19-aff6eca474dc", 00:26:32.304 "strip_size_kb": 0, 00:26:32.304 "state": "online", 00:26:32.304 "raid_level": "raid1", 00:26:32.304 "superblock": false, 00:26:32.304 "num_base_bdevs": 2, 00:26:32.304 "num_base_bdevs_discovered": 2, 00:26:32.304 "num_base_bdevs_operational": 2, 00:26:32.304 "base_bdevs_list": [ 00:26:32.304 { 00:26:32.304 "name": "spare", 00:26:32.304 "uuid": "7ebaebaf-62fc-58f2-91ce-0d9a6e30d6d9", 00:26:32.304 "is_configured": true, 00:26:32.304 "data_offset": 0, 00:26:32.304 "data_size": 65536 00:26:32.304 }, 00:26:32.304 { 00:26:32.304 "name": "BaseBdev2", 00:26:32.304 "uuid": "22a68b45-3b1c-5c46-9e09-94a48f9b6cc9", 00:26:32.304 "is_configured": true, 00:26:32.304 "data_offset": 0, 00:26:32.304 "data_size": 65536 00:26:32.304 } 00:26:32.304 ] 00:26:32.304 }' 00:26:32.304 13:46:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:32.304 13:46:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.561 13:46:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:32.561 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.561 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.561 [2024-11-20 13:46:35.425959] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:32.561 [2024-11-20 13:46:35.426003] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:32.561 [2024-11-20 13:46:35.426113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:32.561 [2024-11-20 13:46:35.426217] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:32.561 [2024-11-20 13:46:35.426248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:32.561 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.561 13:46:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:26:32.561 13:46:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:32.561 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.561 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.561 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.819 13:46:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:26:32.819 13:46:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:26:32.819 13:46:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:26:32.819 13:46:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:32.819 13:46:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:32.819 13:46:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:32.819 13:46:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:32.819 13:46:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:32.819 13:46:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:32.819 13:46:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:32.819 13:46:35 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:32.819 13:46:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:32.819 13:46:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:33.076 /dev/nbd0 00:26:33.076 13:46:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:33.076 13:46:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:33.076 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:33.076 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:26:33.076 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:33.076 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:33.076 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:33.076 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:26:33.076 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:33.076 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:33.076 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:33.076 1+0 records in 00:26:33.076 1+0 records out 00:26:33.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335884 s, 12.2 MB/s 00:26:33.076 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:33.076 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:26:33.076 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:33.076 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:33.076 13:46:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:26:33.076 13:46:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:33.076 13:46:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:33.076 13:46:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:26:33.334 /dev/nbd1 00:26:33.334 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:33.334 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:33.334 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:26:33.334 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:26:33.334 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:33.334 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:33.334 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:26:33.334 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:26:33.334 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:33.334 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:33.334 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:33.334 1+0 records in 00:26:33.334 1+0 records out 00:26:33.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437626 s, 9.4 MB/s 00:26:33.335 13:46:36 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:33.335 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:26:33.335 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:33.335 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:33.335 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:26:33.335 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:33.335 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:33.335 13:46:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:33.593 13:46:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:26:33.593 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:33.593 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:33.593 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:33.593 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:33.593 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:33.593 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:33.851 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:33.851 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:33.851 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:33.851 
13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:33.852 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:33.852 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:33.852 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:33.852 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:33.852 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:33.852 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75693 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75693 ']' 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75693 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75693 00:26:34.110 killing process with pid 75693 00:26:34.110 Received shutdown signal, test time was about 60.000000 seconds 00:26:34.110 00:26:34.110 Latency(us) 00:26:34.110 [2024-11-20T13:46:37.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.110 [2024-11-20T13:46:37.027Z] =================================================================================================================== 00:26:34.110 [2024-11-20T13:46:37.027Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75693' 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75693 00:26:34.110 [2024-11-20 13:46:36.965151] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:34.110 13:46:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75693 00:26:34.368 [2024-11-20 13:46:37.237626] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:26:35.760 00:26:35.760 real 0m18.835s 00:26:35.760 user 0m21.130s 00:26:35.760 sys 0m3.531s 00:26:35.760 ************************************ 00:26:35.760 END TEST raid_rebuild_test 00:26:35.760 ************************************ 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:35.760 13:46:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.760 13:46:38 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:26:35.760 13:46:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:26:35.760 13:46:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:35.760 13:46:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:35.760 ************************************ 00:26:35.760 START TEST raid_rebuild_test_sb 00:26:35.760 ************************************ 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:26:35.760 13:46:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:26:35.760 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:26:35.761 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:26:35.761 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:26:35.761 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:26:35.761 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76141 00:26:35.761 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76141 00:26:35.761 13:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76141 ']' 00:26:35.761 13:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.761 13:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:35.761 13:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:35.761 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:35.761 13:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.761 13:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:35.761 13:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:35.761 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:35.761 Zero copy mechanism will not be used. 00:26:35.761 [2024-11-20 13:46:38.442990] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:26:35.761 [2024-11-20 13:46:38.443143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76141 ] 00:26:35.761 [2024-11-20 13:46:38.617091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.018 [2024-11-20 13:46:38.749228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.276 [2024-11-20 13:46:38.952700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:36.276 [2024-11-20 13:46:38.952784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:36.841 BaseBdev1_malloc 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:36.841 [2024-11-20 13:46:39.506135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:36.841 [2024-11-20 13:46:39.506635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:36.841 [2024-11-20 13:46:39.506685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:36.841 [2024-11-20 13:46:39.506708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:36.841 [2024-11-20 13:46:39.509803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:36.841 [2024-11-20 13:46:39.509857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:36.841 BaseBdev1 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:36.841 BaseBdev2_malloc 00:26:36.841 
13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:36.841 [2024-11-20 13:46:39.558829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:36.841 [2024-11-20 13:46:39.558943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:36.841 [2024-11-20 13:46:39.558982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:36.841 [2024-11-20 13:46:39.559001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:36.841 [2024-11-20 13:46:39.561920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:36.841 [2024-11-20 13:46:39.561973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:36.841 BaseBdev2 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:36.841 spare_malloc 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:36.841 spare_delay 00:26:36.841 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:36.842 [2024-11-20 13:46:39.633876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:36.842 [2024-11-20 13:46:39.633972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:36.842 [2024-11-20 13:46:39.634006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:26:36.842 [2024-11-20 13:46:39.634024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:36.842 [2024-11-20 13:46:39.636872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:36.842 [2024-11-20 13:46:39.636946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:36.842 spare 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:36.842 [2024-11-20 13:46:39.641963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:36.842 [2024-11-20 
13:46:39.644526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:36.842 [2024-11-20 13:46:39.644769] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:36.842 [2024-11-20 13:46:39.644794] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:36.842 [2024-11-20 13:46:39.645149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:36.842 [2024-11-20 13:46:39.645378] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:36.842 [2024-11-20 13:46:39.645402] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:36.842 [2024-11-20 13:46:39.645614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:36.842 "name": "raid_bdev1", 00:26:36.842 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:36.842 "strip_size_kb": 0, 00:26:36.842 "state": "online", 00:26:36.842 "raid_level": "raid1", 00:26:36.842 "superblock": true, 00:26:36.842 "num_base_bdevs": 2, 00:26:36.842 "num_base_bdevs_discovered": 2, 00:26:36.842 "num_base_bdevs_operational": 2, 00:26:36.842 "base_bdevs_list": [ 00:26:36.842 { 00:26:36.842 "name": "BaseBdev1", 00:26:36.842 "uuid": "1f9af34b-2f31-543c-a099-3c79f12dd4db", 00:26:36.842 "is_configured": true, 00:26:36.842 "data_offset": 2048, 00:26:36.842 "data_size": 63488 00:26:36.842 }, 00:26:36.842 { 00:26:36.842 "name": "BaseBdev2", 00:26:36.842 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:36.842 "is_configured": true, 00:26:36.842 "data_offset": 2048, 00:26:36.842 "data_size": 63488 00:26:36.842 } 00:26:36.842 ] 00:26:36.842 }' 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:36.842 13:46:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.408 [2024-11-20 13:46:40.194494] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:37.408 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:37.975 [2024-11-20 13:46:40.602305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:37.975 /dev/nbd0 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:37.975 1+0 records in 00:26:37.975 1+0 records out 00:26:37.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00418201 s, 979 kB/s 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:26:37.975 13:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:26:44.661 63488+0 records in 00:26:44.661 63488+0 records out 00:26:44.661 32505856 bytes (33 MB, 31 MiB) copied, 6.39756 s, 5.1 MB/s 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@51 -- # local i 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:44.661 [2024-11-20 13:46:47.315732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:44.661 [2024-11-20 13:46:47.348963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:44.661 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:44.662 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:44.662 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:44.662 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:44.662 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.662 13:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.662 13:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:44.662 13:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.662 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:44.662 "name": "raid_bdev1", 00:26:44.662 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:44.662 "strip_size_kb": 0, 00:26:44.662 "state": "online", 00:26:44.662 "raid_level": "raid1", 00:26:44.662 "superblock": true, 00:26:44.662 "num_base_bdevs": 2, 00:26:44.662 "num_base_bdevs_discovered": 1, 00:26:44.662 "num_base_bdevs_operational": 1, 00:26:44.662 "base_bdevs_list": [ 00:26:44.662 { 00:26:44.662 "name": null, 00:26:44.662 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:44.662 "is_configured": false, 00:26:44.662 "data_offset": 0, 00:26:44.662 "data_size": 63488 00:26:44.662 }, 00:26:44.662 { 00:26:44.662 "name": "BaseBdev2", 00:26:44.662 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:44.662 "is_configured": true, 00:26:44.662 "data_offset": 2048, 00:26:44.662 "data_size": 63488 00:26:44.662 } 00:26:44.662 ] 00:26:44.662 }' 00:26:44.662 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:44.662 13:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:44.920 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:44.920 13:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.920 13:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:45.178 [2024-11-20 13:46:47.833108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:45.178 [2024-11-20 13:46:47.849889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:26:45.178 13:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.178 13:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:26:45.178 [2024-11-20 13:46:47.852960] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:46.113 13:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:46.113 13:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:46.113 13:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:46.113 13:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:46.113 
13:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:46.113 13:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:46.113 13:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.113 13:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.113 13:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.113 13:46:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.113 13:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:46.113 "name": "raid_bdev1", 00:26:46.113 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:46.113 "strip_size_kb": 0, 00:26:46.113 "state": "online", 00:26:46.113 "raid_level": "raid1", 00:26:46.113 "superblock": true, 00:26:46.113 "num_base_bdevs": 2, 00:26:46.113 "num_base_bdevs_discovered": 2, 00:26:46.113 "num_base_bdevs_operational": 2, 00:26:46.113 "process": { 00:26:46.113 "type": "rebuild", 00:26:46.113 "target": "spare", 00:26:46.113 "progress": { 00:26:46.113 "blocks": 20480, 00:26:46.113 "percent": 32 00:26:46.113 } 00:26:46.113 }, 00:26:46.113 "base_bdevs_list": [ 00:26:46.113 { 00:26:46.113 "name": "spare", 00:26:46.113 "uuid": "f81fef21-998e-5c9c-a6b9-f30e6531e9fe", 00:26:46.113 "is_configured": true, 00:26:46.113 "data_offset": 2048, 00:26:46.113 "data_size": 63488 00:26:46.113 }, 00:26:46.113 { 00:26:46.113 "name": "BaseBdev2", 00:26:46.113 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:46.113 "is_configured": true, 00:26:46.113 "data_offset": 2048, 00:26:46.113 "data_size": 63488 00:26:46.113 } 00:26:46.113 ] 00:26:46.113 }' 00:26:46.113 13:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:46.113 13:46:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:46.113 13:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:46.113 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:46.113 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:26:46.113 13:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.113 13:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.113 [2024-11-20 13:46:49.026442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:46.372 [2024-11-20 13:46:49.062485] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:46.372 [2024-11-20 13:46:49.062614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:46.372 [2024-11-20 13:46:49.062640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:46.372 [2024-11-20 13:46:49.062660] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:46.372 13:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.372 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:46.372 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:46.372 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:46.372 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:46.372 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:46.372 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:26:46.372 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:46.372 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:46.372 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:46.372 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:46.372 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.372 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:46.372 13:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.372 13:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.372 13:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.372 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:46.372 "name": "raid_bdev1", 00:26:46.372 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:46.372 "strip_size_kb": 0, 00:26:46.372 "state": "online", 00:26:46.372 "raid_level": "raid1", 00:26:46.372 "superblock": true, 00:26:46.372 "num_base_bdevs": 2, 00:26:46.372 "num_base_bdevs_discovered": 1, 00:26:46.372 "num_base_bdevs_operational": 1, 00:26:46.372 "base_bdevs_list": [ 00:26:46.372 { 00:26:46.372 "name": null, 00:26:46.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.372 "is_configured": false, 00:26:46.372 "data_offset": 0, 00:26:46.372 "data_size": 63488 00:26:46.372 }, 00:26:46.372 { 00:26:46.372 "name": "BaseBdev2", 00:26:46.372 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:46.372 "is_configured": true, 00:26:46.372 "data_offset": 2048, 00:26:46.372 "data_size": 63488 00:26:46.372 } 00:26:46.372 ] 00:26:46.372 }' 00:26:46.372 13:46:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:46.372 13:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:46.939 "name": "raid_bdev1", 00:26:46.939 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:46.939 "strip_size_kb": 0, 00:26:46.939 "state": "online", 00:26:46.939 "raid_level": "raid1", 00:26:46.939 "superblock": true, 00:26:46.939 "num_base_bdevs": 2, 00:26:46.939 "num_base_bdevs_discovered": 1, 00:26:46.939 "num_base_bdevs_operational": 1, 00:26:46.939 "base_bdevs_list": [ 00:26:46.939 { 00:26:46.939 "name": null, 00:26:46.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.939 "is_configured": false, 00:26:46.939 "data_offset": 0, 00:26:46.939 "data_size": 63488 00:26:46.939 }, 00:26:46.939 
{ 00:26:46.939 "name": "BaseBdev2", 00:26:46.939 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:46.939 "is_configured": true, 00:26:46.939 "data_offset": 2048, 00:26:46.939 "data_size": 63488 00:26:46.939 } 00:26:46.939 ] 00:26:46.939 }' 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:46.939 [2024-11-20 13:46:49.782986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:46.939 [2024-11-20 13:46:49.799469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.939 13:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:26:46.939 [2024-11-20 13:46:49.802431] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:47.958 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:47.958 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:47.958 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:47.958 13:46:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:47.958 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:47.958 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:47.958 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:47.958 13:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.958 13:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:47.958 13:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.958 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:47.958 "name": "raid_bdev1", 00:26:47.958 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:47.958 "strip_size_kb": 0, 00:26:47.958 "state": "online", 00:26:47.958 "raid_level": "raid1", 00:26:47.958 "superblock": true, 00:26:47.958 "num_base_bdevs": 2, 00:26:47.958 "num_base_bdevs_discovered": 2, 00:26:47.958 "num_base_bdevs_operational": 2, 00:26:47.958 "process": { 00:26:47.958 "type": "rebuild", 00:26:47.958 "target": "spare", 00:26:47.958 "progress": { 00:26:47.958 "blocks": 20480, 00:26:47.958 "percent": 32 00:26:47.958 } 00:26:47.958 }, 00:26:47.958 "base_bdevs_list": [ 00:26:47.958 { 00:26:47.958 "name": "spare", 00:26:47.958 "uuid": "f81fef21-998e-5c9c-a6b9-f30e6531e9fe", 00:26:47.958 "is_configured": true, 00:26:47.958 "data_offset": 2048, 00:26:47.958 "data_size": 63488 00:26:47.958 }, 00:26:47.958 { 00:26:47.958 "name": "BaseBdev2", 00:26:47.958 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:47.958 "is_configured": true, 00:26:47.958 "data_offset": 2048, 00:26:47.958 "data_size": 63488 00:26:47.958 } 00:26:47.958 ] 00:26:47.958 }' 00:26:47.958 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:26:48.217 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=421 00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.217 13:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.217 13:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:48.217 "name": "raid_bdev1", 00:26:48.217 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:48.217 "strip_size_kb": 0, 00:26:48.217 "state": "online", 00:26:48.217 "raid_level": "raid1", 00:26:48.217 "superblock": true, 00:26:48.217 "num_base_bdevs": 2, 00:26:48.217 "num_base_bdevs_discovered": 2, 00:26:48.217 "num_base_bdevs_operational": 2, 00:26:48.217 "process": { 00:26:48.217 "type": "rebuild", 00:26:48.217 "target": "spare", 00:26:48.217 "progress": { 00:26:48.217 "blocks": 22528, 00:26:48.217 "percent": 35 00:26:48.217 } 00:26:48.217 }, 00:26:48.217 "base_bdevs_list": [ 00:26:48.217 { 00:26:48.217 "name": "spare", 00:26:48.217 "uuid": "f81fef21-998e-5c9c-a6b9-f30e6531e9fe", 00:26:48.217 "is_configured": true, 00:26:48.217 "data_offset": 2048, 00:26:48.217 "data_size": 63488 00:26:48.217 }, 00:26:48.217 { 00:26:48.217 "name": "BaseBdev2", 00:26:48.217 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:48.217 "is_configured": true, 00:26:48.217 "data_offset": 2048, 00:26:48.217 "data_size": 63488 00:26:48.217 } 00:26:48.217 ] 00:26:48.217 }' 00:26:48.217 13:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:48.217 13:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:48.217 13:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:48.476 13:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:48.476 13:46:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:49.410 13:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:49.410 13:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:49.410 13:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:49.410 13:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:49.410 13:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:49.410 13:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:49.410 13:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.410 13:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:49.410 13:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.410 13:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:49.410 13:46:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.410 13:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:49.410 "name": "raid_bdev1", 00:26:49.410 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:49.410 "strip_size_kb": 0, 00:26:49.410 "state": "online", 00:26:49.410 "raid_level": "raid1", 00:26:49.410 "superblock": true, 00:26:49.410 "num_base_bdevs": 2, 00:26:49.410 "num_base_bdevs_discovered": 2, 00:26:49.410 "num_base_bdevs_operational": 2, 00:26:49.410 "process": { 00:26:49.410 "type": "rebuild", 00:26:49.410 "target": "spare", 00:26:49.410 "progress": { 00:26:49.410 "blocks": 47104, 00:26:49.410 "percent": 74 00:26:49.410 } 00:26:49.410 }, 00:26:49.410 "base_bdevs_list": [ 00:26:49.410 { 
00:26:49.410 "name": "spare", 00:26:49.410 "uuid": "f81fef21-998e-5c9c-a6b9-f30e6531e9fe", 00:26:49.410 "is_configured": true, 00:26:49.410 "data_offset": 2048, 00:26:49.410 "data_size": 63488 00:26:49.410 }, 00:26:49.410 { 00:26:49.410 "name": "BaseBdev2", 00:26:49.410 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:49.411 "is_configured": true, 00:26:49.411 "data_offset": 2048, 00:26:49.411 "data_size": 63488 00:26:49.411 } 00:26:49.411 ] 00:26:49.411 }' 00:26:49.411 13:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:49.411 13:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:49.411 13:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:49.411 13:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:49.411 13:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:26:50.346 [2024-11-20 13:46:52.926402] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:50.346 [2024-11-20 13:46:52.926513] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:50.346 [2024-11-20 13:46:52.926684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:50.605 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:26:50.605 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:50.605 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:50.605 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:50.605 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:50.605 13:46:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:50.605 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.605 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.605 13:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.605 13:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.605 13:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.605 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:50.605 "name": "raid_bdev1", 00:26:50.605 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:50.605 "strip_size_kb": 0, 00:26:50.605 "state": "online", 00:26:50.605 "raid_level": "raid1", 00:26:50.605 "superblock": true, 00:26:50.605 "num_base_bdevs": 2, 00:26:50.605 "num_base_bdevs_discovered": 2, 00:26:50.605 "num_base_bdevs_operational": 2, 00:26:50.605 "base_bdevs_list": [ 00:26:50.605 { 00:26:50.605 "name": "spare", 00:26:50.605 "uuid": "f81fef21-998e-5c9c-a6b9-f30e6531e9fe", 00:26:50.605 "is_configured": true, 00:26:50.605 "data_offset": 2048, 00:26:50.605 "data_size": 63488 00:26:50.605 }, 00:26:50.605 { 00:26:50.605 "name": "BaseBdev2", 00:26:50.605 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:50.605 "is_configured": true, 00:26:50.605 "data_offset": 2048, 00:26:50.605 "data_size": 63488 00:26:50.605 } 00:26:50.605 ] 00:26:50.606 }' 00:26:50.606 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:50.606 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:50.606 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:50.606 13:46:53 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:26:50.606 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:26:50.606 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:50.606 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:50.606 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:50.606 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:50.606 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:50.606 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.606 13:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.606 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.606 13:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.606 13:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.865 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:50.865 "name": "raid_bdev1", 00:26:50.865 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:50.865 "strip_size_kb": 0, 00:26:50.865 "state": "online", 00:26:50.865 "raid_level": "raid1", 00:26:50.865 "superblock": true, 00:26:50.865 "num_base_bdevs": 2, 00:26:50.865 "num_base_bdevs_discovered": 2, 00:26:50.865 "num_base_bdevs_operational": 2, 00:26:50.865 "base_bdevs_list": [ 00:26:50.865 { 00:26:50.865 "name": "spare", 00:26:50.865 "uuid": "f81fef21-998e-5c9c-a6b9-f30e6531e9fe", 00:26:50.865 "is_configured": true, 00:26:50.865 "data_offset": 2048, 00:26:50.865 "data_size": 63488 00:26:50.865 }, 00:26:50.865 { 00:26:50.865 "name": 
"BaseBdev2", 00:26:50.865 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:50.865 "is_configured": true, 00:26:50.865 "data_offset": 2048, 00:26:50.865 "data_size": 63488 00:26:50.865 } 00:26:50.865 ] 00:26:50.865 }' 00:26:50.865 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:50.865 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:50.865 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:50.865 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:50.865 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:50.866 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:50.866 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:50.866 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:50.866 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:50.866 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:50.866 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:50.866 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:50.866 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:50.866 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:50.866 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.866 13:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:50.866 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.866 13:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:50.866 13:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.866 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:50.866 "name": "raid_bdev1", 00:26:50.866 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:50.866 "strip_size_kb": 0, 00:26:50.866 "state": "online", 00:26:50.866 "raid_level": "raid1", 00:26:50.866 "superblock": true, 00:26:50.866 "num_base_bdevs": 2, 00:26:50.866 "num_base_bdevs_discovered": 2, 00:26:50.866 "num_base_bdevs_operational": 2, 00:26:50.866 "base_bdevs_list": [ 00:26:50.866 { 00:26:50.866 "name": "spare", 00:26:50.866 "uuid": "f81fef21-998e-5c9c-a6b9-f30e6531e9fe", 00:26:50.866 "is_configured": true, 00:26:50.866 "data_offset": 2048, 00:26:50.866 "data_size": 63488 00:26:50.866 }, 00:26:50.866 { 00:26:50.866 "name": "BaseBdev2", 00:26:50.866 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:50.866 "is_configured": true, 00:26:50.866 "data_offset": 2048, 00:26:50.866 "data_size": 63488 00:26:50.866 } 00:26:50.866 ] 00:26:50.866 }' 00:26:50.866 13:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:50.866 13:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.433 [2024-11-20 13:46:54.191018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:51.433 [2024-11-20 13:46:54.191064] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:51.433 [2024-11-20 13:46:54.191171] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:51.433 [2024-11-20 13:46:54.191274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:51.433 [2024-11-20 13:46:54.191292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:51.433 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:51.692 /dev/nbd0 00:26:51.692 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:51.692 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:51.692 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:51.692 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:26:51.692 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:51.692 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:51.692 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:51.951 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:26:51.951 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:51.951 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:51.951 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:51.951 1+0 records in 00:26:51.951 1+0 records out 00:26:51.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000339066 s, 12.1 MB/s 00:26:51.951 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:51.951 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:26:51.951 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:51.951 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:51.951 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:26:51.951 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:51.951 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:51.951 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:26:52.210 /dev/nbd1 00:26:52.210 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:52.210 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:52.210 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:26:52.210 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:26:52.210 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:52.210 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:52.210 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:26:52.210 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:26:52.210 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:52.210 13:46:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:52.210 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:52.210 1+0 records in 00:26:52.210 1+0 records out 00:26:52.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455754 s, 9.0 MB/s 00:26:52.210 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:52.210 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:26:52.210 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:52.210 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:52.210 13:46:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:26:52.210 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:52.210 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:52.210 13:46:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:52.210 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:26:52.210 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:52.210 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:52.210 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:52.210 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:26:52.210 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:52.210 
13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:52.469 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:52.469 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:52.469 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:52.469 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:52.469 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:52.469 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:52.728 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:52.728 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:52.728 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:52.728 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.988 [2024-11-20 13:46:55.696151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:52.988 [2024-11-20 13:46:55.696229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:52.988 [2024-11-20 13:46:55.696267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:52.988 [2024-11-20 13:46:55.696283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:52.988 [2024-11-20 13:46:55.699290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:52.988 [2024-11-20 13:46:55.699339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:52.988 [2024-11-20 13:46:55.699491] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:52.988 [2024-11-20 13:46:55.699560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:52.988 [2024-11-20 13:46:55.699745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:26:52.988 spare 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.988 [2024-11-20 13:46:55.799915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:26:52.988 [2024-11-20 13:46:55.800003] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:52.988 [2024-11-20 13:46:55.800431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:26:52.988 [2024-11-20 13:46:55.800716] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:26:52.988 [2024-11-20 13:46:55.800750] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:26:52.988 [2024-11-20 13:46:55.801025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:52.988 "name": "raid_bdev1", 00:26:52.988 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:52.988 "strip_size_kb": 0, 00:26:52.988 "state": "online", 00:26:52.988 "raid_level": "raid1", 00:26:52.988 "superblock": true, 00:26:52.988 "num_base_bdevs": 2, 00:26:52.988 "num_base_bdevs_discovered": 2, 00:26:52.988 "num_base_bdevs_operational": 2, 00:26:52.988 "base_bdevs_list": [ 00:26:52.988 { 00:26:52.988 "name": "spare", 00:26:52.988 "uuid": "f81fef21-998e-5c9c-a6b9-f30e6531e9fe", 00:26:52.988 "is_configured": true, 00:26:52.988 "data_offset": 2048, 00:26:52.988 "data_size": 63488 00:26:52.988 }, 00:26:52.988 { 00:26:52.988 "name": "BaseBdev2", 00:26:52.988 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:52.988 "is_configured": true, 00:26:52.988 "data_offset": 2048, 00:26:52.988 "data_size": 63488 00:26:52.988 } 00:26:52.988 ] 00:26:52.988 }' 00:26:52.988 13:46:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:52.988 13:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.555 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:53.555 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:53.555 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:53.555 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:53.555 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:53.555 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:53.555 13:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.555 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.555 13:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.555 13:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.555 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:53.555 "name": "raid_bdev1", 00:26:53.555 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:53.555 "strip_size_kb": 0, 00:26:53.555 "state": "online", 00:26:53.555 "raid_level": "raid1", 00:26:53.555 "superblock": true, 00:26:53.555 "num_base_bdevs": 2, 00:26:53.555 "num_base_bdevs_discovered": 2, 00:26:53.555 "num_base_bdevs_operational": 2, 00:26:53.555 "base_bdevs_list": [ 00:26:53.555 { 00:26:53.555 "name": "spare", 00:26:53.555 "uuid": "f81fef21-998e-5c9c-a6b9-f30e6531e9fe", 00:26:53.555 "is_configured": true, 00:26:53.555 "data_offset": 2048, 00:26:53.555 "data_size": 63488 00:26:53.555 }, 
00:26:53.555 { 00:26:53.555 "name": "BaseBdev2", 00:26:53.556 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:53.556 "is_configured": true, 00:26:53.556 "data_offset": 2048, 00:26:53.556 "data_size": 63488 00:26:53.556 } 00:26:53.556 ] 00:26:53.556 }' 00:26:53.556 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:53.556 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:53.556 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.814 [2024-11-20 13:46:56.573180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:53.814 "name": "raid_bdev1", 00:26:53.814 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:53.814 "strip_size_kb": 0, 00:26:53.814 "state": "online", 00:26:53.814 "raid_level": "raid1", 00:26:53.814 "superblock": true, 00:26:53.814 "num_base_bdevs": 2, 00:26:53.814 "num_base_bdevs_discovered": 1, 00:26:53.814 "num_base_bdevs_operational": 
1, 00:26:53.814 "base_bdevs_list": [ 00:26:53.814 { 00:26:53.814 "name": null, 00:26:53.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.814 "is_configured": false, 00:26:53.814 "data_offset": 0, 00:26:53.814 "data_size": 63488 00:26:53.814 }, 00:26:53.814 { 00:26:53.814 "name": "BaseBdev2", 00:26:53.814 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:53.814 "is_configured": true, 00:26:53.814 "data_offset": 2048, 00:26:53.814 "data_size": 63488 00:26:53.814 } 00:26:53.814 ] 00:26:53.814 }' 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:53.814 13:46:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.380 13:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:26:54.380 13:46:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.380 13:46:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:54.380 [2024-11-20 13:46:57.085352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:54.380 [2024-11-20 13:46:57.085621] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:54.380 [2024-11-20 13:46:57.085660] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:26:54.380 [2024-11-20 13:46:57.085711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:54.380 [2024-11-20 13:46:57.101150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:26:54.380 13:46:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.380 13:46:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:26:54.380 [2024-11-20 13:46:57.103728] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:55.315 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:55.315 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:55.315 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:55.315 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:55.315 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:55.315 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:55.315 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.315 13:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.315 13:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.315 13:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.315 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:55.315 "name": "raid_bdev1", 00:26:55.315 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:55.315 "strip_size_kb": 0, 00:26:55.315 "state": "online", 00:26:55.315 "raid_level": "raid1", 
00:26:55.315 "superblock": true, 00:26:55.315 "num_base_bdevs": 2, 00:26:55.315 "num_base_bdevs_discovered": 2, 00:26:55.315 "num_base_bdevs_operational": 2, 00:26:55.315 "process": { 00:26:55.315 "type": "rebuild", 00:26:55.315 "target": "spare", 00:26:55.315 "progress": { 00:26:55.315 "blocks": 20480, 00:26:55.315 "percent": 32 00:26:55.315 } 00:26:55.315 }, 00:26:55.315 "base_bdevs_list": [ 00:26:55.315 { 00:26:55.315 "name": "spare", 00:26:55.315 "uuid": "f81fef21-998e-5c9c-a6b9-f30e6531e9fe", 00:26:55.315 "is_configured": true, 00:26:55.315 "data_offset": 2048, 00:26:55.315 "data_size": 63488 00:26:55.315 }, 00:26:55.315 { 00:26:55.315 "name": "BaseBdev2", 00:26:55.315 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:55.315 "is_configured": true, 00:26:55.315 "data_offset": 2048, 00:26:55.315 "data_size": 63488 00:26:55.315 } 00:26:55.315 ] 00:26:55.315 }' 00:26:55.315 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:55.315 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:55.315 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.573 [2024-11-20 13:46:58.268946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:55.573 [2024-11-20 13:46:58.312824] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:55.573 [2024-11-20 13:46:58.312929] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:26:55.573 [2024-11-20 13:46:58.312954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:55.573 [2024-11-20 13:46:58.312969] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.573 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:55.573 "name": "raid_bdev1", 00:26:55.573 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:55.573 "strip_size_kb": 0, 00:26:55.573 "state": "online", 00:26:55.573 "raid_level": "raid1", 00:26:55.573 "superblock": true, 00:26:55.573 "num_base_bdevs": 2, 00:26:55.573 "num_base_bdevs_discovered": 1, 00:26:55.573 "num_base_bdevs_operational": 1, 00:26:55.573 "base_bdevs_list": [ 00:26:55.573 { 00:26:55.573 "name": null, 00:26:55.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:55.573 "is_configured": false, 00:26:55.573 "data_offset": 0, 00:26:55.573 "data_size": 63488 00:26:55.573 }, 00:26:55.573 { 00:26:55.573 "name": "BaseBdev2", 00:26:55.573 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:55.573 "is_configured": true, 00:26:55.574 "data_offset": 2048, 00:26:55.574 "data_size": 63488 00:26:55.574 } 00:26:55.574 ] 00:26:55.574 }' 00:26:55.574 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:55.574 13:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.140 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:26:56.140 13:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.140 13:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:56.140 [2024-11-20 13:46:58.860515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:56.140 [2024-11-20 13:46:58.860604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:56.140 [2024-11-20 13:46:58.860638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:26:56.140 [2024-11-20 13:46:58.860657] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:56.140 [2024-11-20 13:46:58.861272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:56.140 [2024-11-20 13:46:58.861322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:56.140 [2024-11-20 13:46:58.861443] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:56.140 [2024-11-20 13:46:58.861468] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:56.140 [2024-11-20 13:46:58.861481] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:56.140 [2024-11-20 13:46:58.861524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:56.140 [2024-11-20 13:46:58.876786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:26:56.140 spare 00:26:56.140 13:46:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.140 13:46:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:26:56.140 [2024-11-20 13:46:58.879269] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:57.074 13:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:57.074 13:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:57.074 13:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:26:57.074 13:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:26:57.074 13:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:57.074 13:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:26:57.074 13:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.074 13:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.074 13:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.074 13:46:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.074 13:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:57.074 "name": "raid_bdev1", 00:26:57.074 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:57.074 "strip_size_kb": 0, 00:26:57.074 "state": "online", 00:26:57.074 "raid_level": "raid1", 00:26:57.074 "superblock": true, 00:26:57.074 "num_base_bdevs": 2, 00:26:57.074 "num_base_bdevs_discovered": 2, 00:26:57.074 "num_base_bdevs_operational": 2, 00:26:57.074 "process": { 00:26:57.074 "type": "rebuild", 00:26:57.074 "target": "spare", 00:26:57.074 "progress": { 00:26:57.074 "blocks": 20480, 00:26:57.074 "percent": 32 00:26:57.074 } 00:26:57.074 }, 00:26:57.074 "base_bdevs_list": [ 00:26:57.074 { 00:26:57.074 "name": "spare", 00:26:57.074 "uuid": "f81fef21-998e-5c9c-a6b9-f30e6531e9fe", 00:26:57.074 "is_configured": true, 00:26:57.074 "data_offset": 2048, 00:26:57.074 "data_size": 63488 00:26:57.074 }, 00:26:57.074 { 00:26:57.074 "name": "BaseBdev2", 00:26:57.074 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:57.074 "is_configured": true, 00:26:57.074 "data_offset": 2048, 00:26:57.074 "data_size": 63488 00:26:57.074 } 00:26:57.074 ] 00:26:57.074 }' 00:26:57.074 13:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:57.074 13:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:57.332 13:46:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:57.332 
13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.332 [2024-11-20 13:47:00.044443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:57.332 [2024-11-20 13:47:00.088311] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:57.332 [2024-11-20 13:47:00.088602] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:57.332 [2024-11-20 13:47:00.088812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:57.332 [2024-11-20 13:47:00.088965] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:57.332 "name": "raid_bdev1", 00:26:57.332 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:57.332 "strip_size_kb": 0, 00:26:57.332 "state": "online", 00:26:57.332 "raid_level": "raid1", 00:26:57.332 "superblock": true, 00:26:57.332 "num_base_bdevs": 2, 00:26:57.332 "num_base_bdevs_discovered": 1, 00:26:57.332 "num_base_bdevs_operational": 1, 00:26:57.332 "base_bdevs_list": [ 00:26:57.332 { 00:26:57.332 "name": null, 00:26:57.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:57.332 "is_configured": false, 00:26:57.332 "data_offset": 0, 00:26:57.332 "data_size": 63488 00:26:57.332 }, 00:26:57.332 { 00:26:57.332 "name": "BaseBdev2", 00:26:57.332 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:57.332 "is_configured": true, 00:26:57.332 "data_offset": 2048, 00:26:57.332 "data_size": 63488 00:26:57.332 } 00:26:57.332 ] 00:26:57.332 }' 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:57.332 13:47:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.900 13:47:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:57.900 "name": "raid_bdev1", 00:26:57.900 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:57.900 "strip_size_kb": 0, 00:26:57.900 "state": "online", 00:26:57.900 "raid_level": "raid1", 00:26:57.900 "superblock": true, 00:26:57.900 "num_base_bdevs": 2, 00:26:57.900 "num_base_bdevs_discovered": 1, 00:26:57.900 "num_base_bdevs_operational": 1, 00:26:57.900 "base_bdevs_list": [ 00:26:57.900 { 00:26:57.900 "name": null, 00:26:57.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:57.900 "is_configured": false, 00:26:57.900 "data_offset": 0, 00:26:57.900 "data_size": 63488 00:26:57.900 }, 00:26:57.900 { 00:26:57.900 "name": "BaseBdev2", 00:26:57.900 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:57.900 "is_configured": true, 00:26:57.900 "data_offset": 2048, 00:26:57.900 "data_size": 
63488 00:26:57.900 } 00:26:57.900 ] 00:26:57.900 }' 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:57.900 [2024-11-20 13:47:00.804778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:57.900 [2024-11-20 13:47:00.805011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:57.900 [2024-11-20 13:47:00.805098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:26:57.900 [2024-11-20 13:47:00.805343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:57.900 [2024-11-20 13:47:00.806003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:57.900 [2024-11-20 13:47:00.806155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:26:57.900 [2024-11-20 13:47:00.806284] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:57.900 [2024-11-20 13:47:00.806308] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:57.900 [2024-11-20 13:47:00.806331] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:57.900 [2024-11-20 13:47:00.806345] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:26:57.900 BaseBdev1 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.900 13:47:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:26:59.274 13:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:59.274 13:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:59.274 13:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:59.274 13:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:59.274 13:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:59.274 13:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:59.274 13:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:59.274 13:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:59.274 13:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:59.274 13:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:59.274 13:47:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.274 13:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:59.274 13:47:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.274 13:47:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.274 13:47:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.274 13:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:59.274 "name": "raid_bdev1", 00:26:59.274 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:59.274 "strip_size_kb": 0, 00:26:59.274 "state": "online", 00:26:59.274 "raid_level": "raid1", 00:26:59.274 "superblock": true, 00:26:59.274 "num_base_bdevs": 2, 00:26:59.274 "num_base_bdevs_discovered": 1, 00:26:59.274 "num_base_bdevs_operational": 1, 00:26:59.274 "base_bdevs_list": [ 00:26:59.274 { 00:26:59.274 "name": null, 00:26:59.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.274 "is_configured": false, 00:26:59.274 "data_offset": 0, 00:26:59.274 "data_size": 63488 00:26:59.274 }, 00:26:59.274 { 00:26:59.274 "name": "BaseBdev2", 00:26:59.274 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:59.274 "is_configured": true, 00:26:59.274 "data_offset": 2048, 00:26:59.274 "data_size": 63488 00:26:59.274 } 00:26:59.274 ] 00:26:59.274 }' 00:26:59.274 13:47:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:59.274 13:47:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.533 13:47:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:59.533 13:47:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:26:59.533 13:47:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:26:59.533 13:47:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:26:59.533 13:47:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:26:59.533 13:47:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:59.533 13:47:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.533 13:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.533 13:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.533 13:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.533 13:47:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:26:59.533 "name": "raid_bdev1", 00:26:59.533 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:26:59.533 "strip_size_kb": 0, 00:26:59.533 "state": "online", 00:26:59.533 "raid_level": "raid1", 00:26:59.533 "superblock": true, 00:26:59.533 "num_base_bdevs": 2, 00:26:59.533 "num_base_bdevs_discovered": 1, 00:26:59.533 "num_base_bdevs_operational": 1, 00:26:59.533 "base_bdevs_list": [ 00:26:59.533 { 00:26:59.533 "name": null, 00:26:59.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.533 "is_configured": false, 00:26:59.533 "data_offset": 0, 00:26:59.533 "data_size": 63488 00:26:59.533 }, 00:26:59.533 { 00:26:59.533 "name": "BaseBdev2", 00:26:59.533 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:26:59.533 "is_configured": true, 00:26:59.533 "data_offset": 2048, 00:26:59.533 "data_size": 63488 00:26:59.533 } 00:26:59.533 ] 00:26:59.533 }' 00:26:59.533 13:47:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:26:59.533 13:47:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:26:59.533 13:47:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:26:59.792 13:47:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:26:59.792 13:47:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:59.792 13:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:26:59.792 13:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:59.792 13:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:59.792 13:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:59.792 13:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:59.792 13:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:59.792 13:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:59.792 13:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.792 13:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:59.792 [2024-11-20 13:47:02.481361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:59.792 [2024-11-20 13:47:02.481718] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:59.792 [2024-11-20 13:47:02.481755] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:59.792 request: 00:26:59.792 { 00:26:59.792 "base_bdev": "BaseBdev1", 00:26:59.792 "raid_bdev": "raid_bdev1", 00:26:59.792 "method": 
"bdev_raid_add_base_bdev", 00:26:59.792 "req_id": 1 00:26:59.792 } 00:26:59.792 Got JSON-RPC error response 00:26:59.792 response: 00:26:59.792 { 00:26:59.792 "code": -22, 00:26:59.792 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:59.792 } 00:26:59.792 13:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:59.792 13:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:26:59.792 13:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:59.792 13:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:59.792 13:47:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:59.792 13:47:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:27:00.727 13:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:00.728 13:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:00.728 13:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:00.728 13:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:00.728 13:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:00.728 13:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:00.728 13:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:00.728 13:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:00.728 13:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:00.728 13:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:00.728 13:47:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:00.728 13:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.728 13:47:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.728 13:47:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:00.728 13:47:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.728 13:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:00.728 "name": "raid_bdev1", 00:27:00.728 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:27:00.728 "strip_size_kb": 0, 00:27:00.728 "state": "online", 00:27:00.728 "raid_level": "raid1", 00:27:00.728 "superblock": true, 00:27:00.728 "num_base_bdevs": 2, 00:27:00.728 "num_base_bdevs_discovered": 1, 00:27:00.728 "num_base_bdevs_operational": 1, 00:27:00.728 "base_bdevs_list": [ 00:27:00.728 { 00:27:00.728 "name": null, 00:27:00.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.728 "is_configured": false, 00:27:00.728 "data_offset": 0, 00:27:00.728 "data_size": 63488 00:27:00.728 }, 00:27:00.728 { 00:27:00.728 "name": "BaseBdev2", 00:27:00.728 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:27:00.728 "is_configured": true, 00:27:00.728 "data_offset": 2048, 00:27:00.728 "data_size": 63488 00:27:00.728 } 00:27:00.728 ] 00:27:00.728 }' 00:27:00.728 13:47:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:00.728 13:47:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:01.295 "name": "raid_bdev1", 00:27:01.295 "uuid": "9dfa28ef-050f-4e61-a376-79adf8db4d8a", 00:27:01.295 "strip_size_kb": 0, 00:27:01.295 "state": "online", 00:27:01.295 "raid_level": "raid1", 00:27:01.295 "superblock": true, 00:27:01.295 "num_base_bdevs": 2, 00:27:01.295 "num_base_bdevs_discovered": 1, 00:27:01.295 "num_base_bdevs_operational": 1, 00:27:01.295 "base_bdevs_list": [ 00:27:01.295 { 00:27:01.295 "name": null, 00:27:01.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.295 "is_configured": false, 00:27:01.295 "data_offset": 0, 00:27:01.295 "data_size": 63488 00:27:01.295 }, 00:27:01.295 { 00:27:01.295 "name": "BaseBdev2", 00:27:01.295 "uuid": "2422c89f-4f48-552d-ae78-e83502fbab7a", 00:27:01.295 "is_configured": true, 00:27:01.295 "data_offset": 2048, 00:27:01.295 "data_size": 63488 00:27:01.295 } 00:27:01.295 ] 00:27:01.295 }' 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76141 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76141 ']' 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76141 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:01.295 13:47:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76141 00:27:01.554 killing process with pid 76141 00:27:01.554 Received shutdown signal, test time was about 60.000000 seconds 00:27:01.554 00:27:01.554 Latency(us) 00:27:01.554 [2024-11-20T13:47:04.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.554 [2024-11-20T13:47:04.471Z] =================================================================================================================== 00:27:01.554 [2024-11-20T13:47:04.471Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:01.554 13:47:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:01.554 13:47:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:01.554 13:47:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76141' 00:27:01.554 13:47:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76141 00:27:01.554 [2024-11-20 13:47:04.214326] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:01.554 13:47:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76141 00:27:01.554 [2024-11-20 13:47:04.214482] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:01.554 [2024-11-20 13:47:04.214552] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:01.554 [2024-11-20 13:47:04.214572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:27:01.813 [2024-11-20 13:47:04.476222] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:02.745 13:47:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:27:02.745 00:27:02.745 real 0m27.184s 00:27:02.745 user 0m33.549s 00:27:02.745 sys 0m4.046s 00:27:02.745 13:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:02.745 ************************************ 00:27:02.745 END TEST raid_rebuild_test_sb 00:27:02.745 ************************************ 00:27:02.745 13:47:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:02.745 13:47:05 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:27:02.745 13:47:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:27:02.745 13:47:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:02.745 13:47:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:02.745 ************************************ 00:27:02.745 START TEST raid_rebuild_test_io 00:27:02.745 ************************************ 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:02.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76910 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76910 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76910 ']' 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:02.746 13:47:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:03.004 [2024-11-20 13:47:05.698448] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:27:03.004 [2024-11-20 13:47:05.699155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76910 ] 00:27:03.004 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:03.004 Zero copy mechanism will not be used. 00:27:03.004 [2024-11-20 13:47:05.894051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.262 [2024-11-20 13:47:06.053013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.521 [2024-11-20 13:47:06.288710] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:03.521 [2024-11-20 13:47:06.289017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:04.136 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:04.136 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:27:04.136 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:04.136 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:04.136 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.136 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:04.136 BaseBdev1_malloc 00:27:04.136 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:04.136 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:04.136 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.136 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:04.136 [2024-11-20 13:47:06.778623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:04.136 [2024-11-20 13:47:06.778941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:04.136 [2024-11-20 13:47:06.779009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:04.137 [2024-11-20 13:47:06.779047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:04.137 [2024-11-20 13:47:06.782279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:04.137 [2024-11-20 13:47:06.782343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:04.137 BaseBdev1 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:04.137 BaseBdev2_malloc 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:04.137 [2024-11-20 13:47:06.831671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:04.137 [2024-11-20 13:47:06.831755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:04.137 [2024-11-20 13:47:06.831791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:04.137 [2024-11-20 13:47:06.831811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:04.137 [2024-11-20 13:47:06.834633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:04.137 [2024-11-20 13:47:06.834684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:04.137 BaseBdev2 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:04.137 spare_malloc 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:04.137 spare_delay 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.137 
13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:04.137 [2024-11-20 13:47:06.898102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:04.137 [2024-11-20 13:47:06.898317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:04.137 [2024-11-20 13:47:06.898373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:04.137 [2024-11-20 13:47:06.898395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:04.137 [2024-11-20 13:47:06.901319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:04.137 [2024-11-20 13:47:06.901389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:04.137 spare 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:04.137 [2024-11-20 13:47:06.906291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:04.137 [2024-11-20 13:47:06.908899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:04.137 [2024-11-20 13:47:06.909058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:04.137 [2024-11-20 13:47:06.909083] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:04.137 [2024-11-20 13:47:06.909444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:04.137 [2024-11-20 13:47:06.909659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:04.137 [2024-11-20 13:47:06.909678] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:04.137 [2024-11-20 13:47:06.909878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:04.137 "name": "raid_bdev1", 00:27:04.137 "uuid": "b9048dd0-b6e1-4fcc-b502-6defed3a4c46", 00:27:04.137 "strip_size_kb": 0, 00:27:04.137 "state": "online", 00:27:04.137 "raid_level": "raid1", 00:27:04.137 "superblock": false, 00:27:04.137 "num_base_bdevs": 2, 00:27:04.137 "num_base_bdevs_discovered": 2, 00:27:04.137 "num_base_bdevs_operational": 2, 00:27:04.137 "base_bdevs_list": [ 00:27:04.137 { 00:27:04.137 "name": "BaseBdev1", 00:27:04.137 "uuid": "738b571b-c94a-5793-a4b6-cef1776f8850", 00:27:04.137 "is_configured": true, 00:27:04.137 "data_offset": 0, 00:27:04.137 "data_size": 65536 00:27:04.137 }, 00:27:04.137 { 00:27:04.137 "name": "BaseBdev2", 00:27:04.137 "uuid": "59ce830f-7d04-59e1-9ce3-800832b94950", 00:27:04.137 "is_configured": true, 00:27:04.137 "data_offset": 0, 00:27:04.137 "data_size": 65536 00:27:04.137 } 00:27:04.137 ] 00:27:04.137 }' 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:04.137 13:47:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:04.705 [2024-11-20 
13:47:07.402774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:04.705 [2024-11-20 13:47:07.502426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:04.705 "name": "raid_bdev1", 00:27:04.705 "uuid": "b9048dd0-b6e1-4fcc-b502-6defed3a4c46", 00:27:04.705 "strip_size_kb": 0, 00:27:04.705 "state": "online", 00:27:04.705 "raid_level": "raid1", 00:27:04.705 "superblock": false, 00:27:04.705 "num_base_bdevs": 2, 00:27:04.705 "num_base_bdevs_discovered": 1, 00:27:04.705 "num_base_bdevs_operational": 1, 00:27:04.705 "base_bdevs_list": [ 00:27:04.705 { 00:27:04.705 "name": null, 00:27:04.705 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:27:04.705 "is_configured": false, 00:27:04.705 "data_offset": 0, 00:27:04.705 "data_size": 65536 00:27:04.705 }, 00:27:04.705 { 00:27:04.705 "name": "BaseBdev2", 00:27:04.705 "uuid": "59ce830f-7d04-59e1-9ce3-800832b94950", 00:27:04.705 "is_configured": true, 00:27:04.705 "data_offset": 0, 00:27:04.705 "data_size": 65536 00:27:04.705 } 00:27:04.705 ] 00:27:04.705 }' 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:04.705 13:47:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:04.964 [2024-11-20 13:47:07.627036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:27:04.964 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:04.964 Zero copy mechanism will not be used. 00:27:04.964 Running I/O for 60 seconds... 00:27:05.222 13:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:05.222 13:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.222 13:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:05.222 [2024-11-20 13:47:08.053039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:05.222 13:47:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.222 13:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:27:05.222 [2024-11-20 13:47:08.125826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:27:05.222 [2024-11-20 13:47:08.128452] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:05.481 [2024-11-20 13:47:08.247942] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:05.481 
[2024-11-20 13:47:08.248634] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:05.740 [2024-11-20 13:47:08.451070] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:05.740 [2024-11-20 13:47:08.451460] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:05.999 147.00 IOPS, 441.00 MiB/s [2024-11-20T13:47:08.916Z] [2024-11-20 13:47:08.707286] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:05.999 [2024-11-20 13:47:08.708076] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:06.258 [2024-11-20 13:47:08.920391] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:06.258 [2024-11-20 13:47:08.921098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:06.258 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:06.258 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:06.258 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:06.259 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:06.259 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:06.259 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:06.259 13:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.259 13:47:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.259 13:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:06.259 13:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.259 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:06.259 "name": "raid_bdev1", 00:27:06.259 "uuid": "b9048dd0-b6e1-4fcc-b502-6defed3a4c46", 00:27:06.259 "strip_size_kb": 0, 00:27:06.259 "state": "online", 00:27:06.259 "raid_level": "raid1", 00:27:06.259 "superblock": false, 00:27:06.259 "num_base_bdevs": 2, 00:27:06.259 "num_base_bdevs_discovered": 2, 00:27:06.259 "num_base_bdevs_operational": 2, 00:27:06.259 "process": { 00:27:06.259 "type": "rebuild", 00:27:06.259 "target": "spare", 00:27:06.259 "progress": { 00:27:06.259 "blocks": 10240, 00:27:06.259 "percent": 15 00:27:06.259 } 00:27:06.259 }, 00:27:06.259 "base_bdevs_list": [ 00:27:06.259 { 00:27:06.259 "name": "spare", 00:27:06.259 "uuid": "115c0f14-37f9-5b4a-be0c-0c9422d26a87", 00:27:06.259 "is_configured": true, 00:27:06.259 "data_offset": 0, 00:27:06.259 "data_size": 65536 00:27:06.259 }, 00:27:06.259 { 00:27:06.259 "name": "BaseBdev2", 00:27:06.259 "uuid": "59ce830f-7d04-59e1-9ce3-800832b94950", 00:27:06.259 "is_configured": true, 00:27:06.259 "data_offset": 0, 00:27:06.259 "data_size": 65536 00:27:06.259 } 00:27:06.259 ] 00:27:06.259 }' 00:27:06.259 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:06.516 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:06.516 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:06.516 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:06.516 13:47:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:06.516 13:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.516 13:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:06.516 [2024-11-20 13:47:09.275351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:06.516 [2024-11-20 13:47:09.300306] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:06.516 [2024-11-20 13:47:09.400472] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:06.516 [2024-11-20 13:47:09.412348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:06.516 [2024-11-20 13:47:09.412449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:06.516 [2024-11-20 13:47:09.412479] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:06.774 [2024-11-20 13:47:09.457100] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:27:06.774 13:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.774 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:06.774 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:06.774 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:06.774 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:06.774 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:06.774 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:27:06.774 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:06.774 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:06.774 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:06.774 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:06.774 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:06.774 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.774 13:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.774 13:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:06.774 13:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.774 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:06.774 "name": "raid_bdev1", 00:27:06.774 "uuid": "b9048dd0-b6e1-4fcc-b502-6defed3a4c46", 00:27:06.774 "strip_size_kb": 0, 00:27:06.774 "state": "online", 00:27:06.774 "raid_level": "raid1", 00:27:06.774 "superblock": false, 00:27:06.774 "num_base_bdevs": 2, 00:27:06.774 "num_base_bdevs_discovered": 1, 00:27:06.774 "num_base_bdevs_operational": 1, 00:27:06.774 "base_bdevs_list": [ 00:27:06.774 { 00:27:06.774 "name": null, 00:27:06.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.774 "is_configured": false, 00:27:06.774 "data_offset": 0, 00:27:06.774 "data_size": 65536 00:27:06.774 }, 00:27:06.774 { 00:27:06.774 "name": "BaseBdev2", 00:27:06.774 "uuid": "59ce830f-7d04-59e1-9ce3-800832b94950", 00:27:06.774 "is_configured": true, 00:27:06.774 "data_offset": 0, 00:27:06.774 "data_size": 65536 00:27:06.774 } 00:27:06.774 ] 00:27:06.774 }' 00:27:06.774 13:47:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:06.774 13:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:07.032 125.00 IOPS, 375.00 MiB/s [2024-11-20T13:47:09.949Z] 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:07.032 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:07.032 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:07.290 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:07.290 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:07.290 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:07.290 13:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.290 13:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:07.290 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.290 13:47:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.290 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:07.290 "name": "raid_bdev1", 00:27:07.290 "uuid": "b9048dd0-b6e1-4fcc-b502-6defed3a4c46", 00:27:07.290 "strip_size_kb": 0, 00:27:07.290 "state": "online", 00:27:07.290 "raid_level": "raid1", 00:27:07.290 "superblock": false, 00:27:07.290 "num_base_bdevs": 2, 00:27:07.290 "num_base_bdevs_discovered": 1, 00:27:07.290 "num_base_bdevs_operational": 1, 00:27:07.290 "base_bdevs_list": [ 00:27:07.290 { 00:27:07.290 "name": null, 00:27:07.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:07.290 "is_configured": false, 00:27:07.290 "data_offset": 0, 00:27:07.290 "data_size": 65536 
00:27:07.290 }, 00:27:07.290 { 00:27:07.290 "name": "BaseBdev2", 00:27:07.290 "uuid": "59ce830f-7d04-59e1-9ce3-800832b94950", 00:27:07.290 "is_configured": true, 00:27:07.290 "data_offset": 0, 00:27:07.290 "data_size": 65536 00:27:07.290 } 00:27:07.290 ] 00:27:07.290 }' 00:27:07.290 13:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:07.290 13:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:07.290 13:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:07.290 13:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:07.290 13:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:07.290 13:47:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.290 13:47:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:07.290 [2024-11-20 13:47:10.116466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:07.290 13:47:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.290 13:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:27:07.290 [2024-11-20 13:47:10.183208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:27:07.290 [2024-11-20 13:47:10.186011] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:07.548 [2024-11-20 13:47:10.305561] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:07.548 [2024-11-20 13:47:10.306120] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:07.807 [2024-11-20 13:47:10.543921] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:07.807 [2024-11-20 13:47:10.544339] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:08.064 145.33 IOPS, 436.00 MiB/s [2024-11-20T13:47:10.981Z] [2024-11-20 13:47:10.914857] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:08.064 [2024-11-20 13:47:10.915586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:08.322 [2024-11-20 13:47:11.146579] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:08.322 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:08.322 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:08.322 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:08.322 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:08.322 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:08.323 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:08.323 13:47:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.323 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.323 13:47:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:08.323 13:47:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.323 13:47:11 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:08.323 "name": "raid_bdev1", 00:27:08.323 "uuid": "b9048dd0-b6e1-4fcc-b502-6defed3a4c46", 00:27:08.323 "strip_size_kb": 0, 00:27:08.323 "state": "online", 00:27:08.323 "raid_level": "raid1", 00:27:08.323 "superblock": false, 00:27:08.323 "num_base_bdevs": 2, 00:27:08.323 "num_base_bdevs_discovered": 2, 00:27:08.323 "num_base_bdevs_operational": 2, 00:27:08.323 "process": { 00:27:08.323 "type": "rebuild", 00:27:08.323 "target": "spare", 00:27:08.323 "progress": { 00:27:08.323 "blocks": 10240, 00:27:08.323 "percent": 15 00:27:08.323 } 00:27:08.323 }, 00:27:08.323 "base_bdevs_list": [ 00:27:08.323 { 00:27:08.323 "name": "spare", 00:27:08.323 "uuid": "115c0f14-37f9-5b4a-be0c-0c9422d26a87", 00:27:08.323 "is_configured": true, 00:27:08.323 "data_offset": 0, 00:27:08.323 "data_size": 65536 00:27:08.323 }, 00:27:08.323 { 00:27:08.323 "name": "BaseBdev2", 00:27:08.323 "uuid": "59ce830f-7d04-59e1-9ce3-800832b94950", 00:27:08.323 "is_configured": true, 00:27:08.323 "data_offset": 0, 00:27:08.323 "data_size": 65536 00:27:08.323 } 00:27:08.323 ] 00:27:08.323 }' 00:27:08.323 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 
00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=442 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:08.581 "name": "raid_bdev1", 00:27:08.581 "uuid": "b9048dd0-b6e1-4fcc-b502-6defed3a4c46", 00:27:08.581 "strip_size_kb": 0, 00:27:08.581 "state": "online", 00:27:08.581 "raid_level": "raid1", 00:27:08.581 "superblock": false, 00:27:08.581 "num_base_bdevs": 2, 00:27:08.581 "num_base_bdevs_discovered": 2, 00:27:08.581 "num_base_bdevs_operational": 2, 00:27:08.581 "process": { 00:27:08.581 "type": "rebuild", 00:27:08.581 "target": "spare", 00:27:08.581 "progress": { 00:27:08.581 "blocks": 12288, 00:27:08.581 "percent": 18 00:27:08.581 } 00:27:08.581 }, 00:27:08.581 
"base_bdevs_list": [ 00:27:08.581 { 00:27:08.581 "name": "spare", 00:27:08.581 "uuid": "115c0f14-37f9-5b4a-be0c-0c9422d26a87", 00:27:08.581 "is_configured": true, 00:27:08.581 "data_offset": 0, 00:27:08.581 "data_size": 65536 00:27:08.581 }, 00:27:08.581 { 00:27:08.581 "name": "BaseBdev2", 00:27:08.581 "uuid": "59ce830f-7d04-59e1-9ce3-800832b94950", 00:27:08.581 "is_configured": true, 00:27:08.581 "data_offset": 0, 00:27:08.581 "data_size": 65536 00:27:08.581 } 00:27:08.581 ] 00:27:08.581 }' 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:08.581 13:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:09.097 137.50 IOPS, 412.50 MiB/s [2024-11-20T13:47:12.014Z] [2024-11-20 13:47:11.822578] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:27:09.097 [2024-11-20 13:47:11.936095] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:27:09.097 [2024-11-20 13:47:11.936503] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:27:09.663 13:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:09.663 13:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:09.663 13:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:09.663 13:47:12 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:09.663 13:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:09.663 13:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:09.663 13:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:09.663 13:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.663 13:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.663 13:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:09.663 13:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.663 [2024-11-20 13:47:12.500612] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:27:09.663 13:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:09.663 "name": "raid_bdev1", 00:27:09.663 "uuid": "b9048dd0-b6e1-4fcc-b502-6defed3a4c46", 00:27:09.663 "strip_size_kb": 0, 00:27:09.663 "state": "online", 00:27:09.663 "raid_level": "raid1", 00:27:09.663 "superblock": false, 00:27:09.663 "num_base_bdevs": 2, 00:27:09.663 "num_base_bdevs_discovered": 2, 00:27:09.663 "num_base_bdevs_operational": 2, 00:27:09.663 "process": { 00:27:09.664 "type": "rebuild", 00:27:09.664 "target": "spare", 00:27:09.664 "progress": { 00:27:09.664 "blocks": 30720, 00:27:09.664 "percent": 46 00:27:09.664 } 00:27:09.664 }, 00:27:09.664 "base_bdevs_list": [ 00:27:09.664 { 00:27:09.664 "name": "spare", 00:27:09.664 "uuid": "115c0f14-37f9-5b4a-be0c-0c9422d26a87", 00:27:09.664 "is_configured": true, 00:27:09.664 "data_offset": 0, 00:27:09.664 "data_size": 65536 00:27:09.664 }, 00:27:09.664 { 00:27:09.664 "name": "BaseBdev2", 00:27:09.664 "uuid": "59ce830f-7d04-59e1-9ce3-800832b94950", 
00:27:09.664 "is_configured": true, 00:27:09.664 "data_offset": 0, 00:27:09.664 "data_size": 65536 00:27:09.664 } 00:27:09.664 ] 00:27:09.664 }' 00:27:09.664 13:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:09.664 13:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:09.664 13:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:09.947 13:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:09.947 13:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:10.224 125.80 IOPS, 377.40 MiB/s [2024-11-20T13:47:13.141Z] [2024-11-20 13:47:12.942854] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:27:10.791 13:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:10.791 13:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:10.791 13:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:10.791 13:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:10.791 13:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:10.791 13:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:10.791 13:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:10.791 13:47:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.791 13:47:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:10.791 13:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:27:10.791 13:47:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.791 113.17 IOPS, 339.50 MiB/s [2024-11-20T13:47:13.708Z] 13:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:10.791 "name": "raid_bdev1", 00:27:10.791 "uuid": "b9048dd0-b6e1-4fcc-b502-6defed3a4c46", 00:27:10.791 "strip_size_kb": 0, 00:27:10.791 "state": "online", 00:27:10.791 "raid_level": "raid1", 00:27:10.791 "superblock": false, 00:27:10.791 "num_base_bdevs": 2, 00:27:10.791 "num_base_bdevs_discovered": 2, 00:27:10.791 "num_base_bdevs_operational": 2, 00:27:10.791 "process": { 00:27:10.791 "type": "rebuild", 00:27:10.791 "target": "spare", 00:27:10.791 "progress": { 00:27:10.791 "blocks": 49152, 00:27:10.791 "percent": 75 00:27:10.791 } 00:27:10.791 }, 00:27:10.791 "base_bdevs_list": [ 00:27:10.791 { 00:27:10.791 "name": "spare", 00:27:10.791 "uuid": "115c0f14-37f9-5b4a-be0c-0c9422d26a87", 00:27:10.791 "is_configured": true, 00:27:10.791 "data_offset": 0, 00:27:10.791 "data_size": 65536 00:27:10.791 }, 00:27:10.791 { 00:27:10.791 "name": "BaseBdev2", 00:27:10.791 "uuid": "59ce830f-7d04-59e1-9ce3-800832b94950", 00:27:10.791 "is_configured": true, 00:27:10.791 "data_offset": 0, 00:27:10.791 "data_size": 65536 00:27:10.791 } 00:27:10.791 ] 00:27:10.791 }' 00:27:10.791 13:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:11.049 13:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:11.049 13:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:11.049 13:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:11.049 13:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:11.049 [2024-11-20 13:47:13.826171] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:27:11.308 [2024-11-20 13:47:14.163217] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:27:11.875 [2024-11-20 13:47:14.604438] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:11.875 101.14 IOPS, 303.43 MiB/s [2024-11-20T13:47:14.792Z] [2024-11-20 13:47:14.712296] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:11.875 [2024-11-20 13:47:14.714931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:11.875 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:11.875 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:11.875 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:11.875 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:11.875 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:11.875 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:11.875 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:11.875 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.875 13:47:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.875 13:47:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:12.134 13:47:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.134 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:27:12.134 "name": "raid_bdev1", 00:27:12.134 "uuid": "b9048dd0-b6e1-4fcc-b502-6defed3a4c46", 00:27:12.134 "strip_size_kb": 0, 00:27:12.134 "state": "online", 00:27:12.134 "raid_level": "raid1", 00:27:12.134 "superblock": false, 00:27:12.134 "num_base_bdevs": 2, 00:27:12.134 "num_base_bdevs_discovered": 2, 00:27:12.134 "num_base_bdevs_operational": 2, 00:27:12.134 "base_bdevs_list": [ 00:27:12.134 { 00:27:12.134 "name": "spare", 00:27:12.134 "uuid": "115c0f14-37f9-5b4a-be0c-0c9422d26a87", 00:27:12.134 "is_configured": true, 00:27:12.134 "data_offset": 0, 00:27:12.134 "data_size": 65536 00:27:12.134 }, 00:27:12.134 { 00:27:12.134 "name": "BaseBdev2", 00:27:12.134 "uuid": "59ce830f-7d04-59e1-9ce3-800832b94950", 00:27:12.134 "is_configured": true, 00:27:12.134 "data_offset": 0, 00:27:12.134 "data_size": 65536 00:27:12.134 } 00:27:12.134 ] 00:27:12.134 }' 00:27:12.134 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:12.134 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:12.134 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:12.134 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:27:12.134 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:27:12.134 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:12.134 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:12.134 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:12.134 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:12.134 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:12.134 13:47:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:12.134 13:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.134 13:47:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.134 13:47:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:12.134 13:47:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.134 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:12.134 "name": "raid_bdev1", 00:27:12.134 "uuid": "b9048dd0-b6e1-4fcc-b502-6defed3a4c46", 00:27:12.134 "strip_size_kb": 0, 00:27:12.134 "state": "online", 00:27:12.134 "raid_level": "raid1", 00:27:12.134 "superblock": false, 00:27:12.134 "num_base_bdevs": 2, 00:27:12.134 "num_base_bdevs_discovered": 2, 00:27:12.134 "num_base_bdevs_operational": 2, 00:27:12.134 "base_bdevs_list": [ 00:27:12.134 { 00:27:12.134 "name": "spare", 00:27:12.134 "uuid": "115c0f14-37f9-5b4a-be0c-0c9422d26a87", 00:27:12.134 "is_configured": true, 00:27:12.134 "data_offset": 0, 00:27:12.134 "data_size": 65536 00:27:12.134 }, 00:27:12.134 { 00:27:12.134 "name": "BaseBdev2", 00:27:12.134 "uuid": "59ce830f-7d04-59e1-9ce3-800832b94950", 00:27:12.134 "is_configured": true, 00:27:12.134 "data_offset": 0, 00:27:12.134 "data_size": 65536 00:27:12.134 } 00:27:12.134 ] 00:27:12.134 }' 00:27:12.134 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:12.392 "name": "raid_bdev1", 00:27:12.392 "uuid": "b9048dd0-b6e1-4fcc-b502-6defed3a4c46", 00:27:12.392 "strip_size_kb": 0, 00:27:12.392 "state": "online", 00:27:12.392 "raid_level": "raid1", 00:27:12.392 "superblock": false, 00:27:12.392 "num_base_bdevs": 2, 00:27:12.392 
"num_base_bdevs_discovered": 2, 00:27:12.392 "num_base_bdevs_operational": 2, 00:27:12.392 "base_bdevs_list": [ 00:27:12.392 { 00:27:12.392 "name": "spare", 00:27:12.392 "uuid": "115c0f14-37f9-5b4a-be0c-0c9422d26a87", 00:27:12.392 "is_configured": true, 00:27:12.392 "data_offset": 0, 00:27:12.392 "data_size": 65536 00:27:12.392 }, 00:27:12.392 { 00:27:12.392 "name": "BaseBdev2", 00:27:12.392 "uuid": "59ce830f-7d04-59e1-9ce3-800832b94950", 00:27:12.392 "is_configured": true, 00:27:12.392 "data_offset": 0, 00:27:12.392 "data_size": 65536 00:27:12.392 } 00:27:12.392 ] 00:27:12.392 }' 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:12.392 13:47:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:12.959 [2024-11-20 13:47:15.637138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:12.959 [2024-11-20 13:47:15.637184] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:12.959 93.88 IOPS, 281.62 MiB/s 00:27:12.959 Latency(us) 00:27:12.959 [2024-11-20T13:47:15.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.959 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:27:12.959 raid_bdev1 : 8.11 92.88 278.64 0.00 0.00 14256.17 307.20 119156.36 00:27:12.959 [2024-11-20T13:47:15.876Z] =================================================================================================================== 00:27:12.959 [2024-11-20T13:47:15.876Z] Total : 92.88 278.64 0.00 0.00 14256.17 307.20 119156.36 00:27:12.959 { 
00:27:12.959 "results": [ 00:27:12.959 { 00:27:12.959 "job": "raid_bdev1", 00:27:12.959 "core_mask": "0x1", 00:27:12.959 "workload": "randrw", 00:27:12.959 "percentage": 50, 00:27:12.959 "status": "finished", 00:27:12.959 "queue_depth": 2, 00:27:12.959 "io_size": 3145728, 00:27:12.959 "runtime": 8.107157, 00:27:12.959 "iops": 92.88089523861447, 00:27:12.959 "mibps": 278.6426857158434, 00:27:12.959 "io_failed": 0, 00:27:12.959 "io_timeout": 0, 00:27:12.959 "avg_latency_us": 14256.166818785463, 00:27:12.959 "min_latency_us": 307.2, 00:27:12.959 "max_latency_us": 119156.36363636363 00:27:12.959 } 00:27:12.959 ], 00:27:12.959 "core_count": 1 00:27:12.959 } 00:27:12.959 [2024-11-20 13:47:15.757270] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:12.959 [2024-11-20 13:47:15.757376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:12.959 [2024-11-20 13:47:15.757494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:12.959 [2024-11-20 13:47:15.757520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true 
= true ']' 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:12.959 13:47:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:27:13.526 /dev/nbd0 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:13.526 1+0 records in 00:27:13.526 1+0 records out 00:27:13.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384407 s, 10.7 MB/s 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 
00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:13.526 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:27:13.785 /dev/nbd1 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:27:13.785 1+0 records in 00:27:13.785 1+0 records out 00:27:13.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736799 s, 5.6 MB/s 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:27:13.785 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:13.786 13:47:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:27:14.352 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
00:27:14.352 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:14.352 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:14.352 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:14.352 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:14.352 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:14.352 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:27:14.352 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:14.352 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:27:14.352 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:14.352 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:14.352 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:14.352 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:27:14.352 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:14.353 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76910 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76910 ']' 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76910 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76910 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:14.611 killing process with pid 76910 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76910' 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76910 00:27:14.611 Received shutdown signal, test time was about 9.721111 seconds 00:27:14.611 00:27:14.611 Latency(us) 00:27:14.611 [2024-11-20T13:47:17.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.611 [2024-11-20T13:47:17.528Z] =================================================================================================================== 00:27:14.611 
[2024-11-20T13:47:17.528Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:14.611 [2024-11-20 13:47:17.351002] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:14.611 13:47:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76910 00:27:14.869 [2024-11-20 13:47:17.562992] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:15.805 13:47:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:27:15.805 00:27:15.805 real 0m13.106s 00:27:15.805 user 0m17.163s 00:27:15.805 sys 0m1.443s 00:27:15.805 13:47:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:15.805 13:47:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:15.805 ************************************ 00:27:15.805 END TEST raid_rebuild_test_io 00:27:15.805 ************************************ 00:27:16.064 13:47:18 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:27:16.064 13:47:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:27:16.064 13:47:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:16.064 13:47:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:16.064 ************************************ 00:27:16.064 START TEST raid_rebuild_test_sb_io 00:27:16.064 ************************************ 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # 
local background_io=true 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77297 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77297 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77297 ']' 00:27:16.064 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.065 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.065 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.065 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.065 13:47:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:16.065 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:16.065 Zero copy mechanism will not be used. 00:27:16.065 [2024-11-20 13:47:18.853739] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:27:16.065 [2024-11-20 13:47:18.853951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77297 ] 00:27:16.323 [2024-11-20 13:47:19.040354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.323 [2024-11-20 13:47:19.178512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.580 [2024-11-20 13:47:19.410505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:16.580 [2024-11-20 13:47:19.410577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:17.148 BaseBdev1_malloc 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:17.148 [2024-11-20 13:47:19.876527] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:17.148 [2024-11-20 13:47:19.876748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:17.148 [2024-11-20 13:47:19.876794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:17.148 [2024-11-20 13:47:19.876816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:17.148 [2024-11-20 13:47:19.879664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:17.148 [2024-11-20 13:47:19.879727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:17.148 BaseBdev1 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:17.148 BaseBdev2_malloc 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:17.148 [2024-11-20 13:47:19.930148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:17.148 [2024-11-20 13:47:19.930229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:27:17.148 [2024-11-20 13:47:19.930264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:17.148 [2024-11-20 13:47:19.930283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:17.148 [2024-11-20 13:47:19.933060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:17.148 [2024-11-20 13:47:19.933111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:17.148 BaseBdev2 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:17.148 spare_malloc 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.148 13:47:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:17.148 spare_delay 00:27:17.148 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.148 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:17.148 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.148 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:17.148 
[2024-11-20 13:47:20.004941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:17.148 [2024-11-20 13:47:20.005017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:17.148 [2024-11-20 13:47:20.005048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:17.148 [2024-11-20 13:47:20.005067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:17.148 [2024-11-20 13:47:20.007981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:17.148 [2024-11-20 13:47:20.008032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:17.148 spare 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:17.149 [2024-11-20 13:47:20.013008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:17.149 [2024-11-20 13:47:20.015477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:17.149 [2024-11-20 13:47:20.015882] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:17.149 [2024-11-20 13:47:20.015937] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:17.149 [2024-11-20 13:47:20.016256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:17.149 [2024-11-20 13:47:20.016494] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:17.149 [2024-11-20 
13:47:20.016512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:17.149 [2024-11-20 13:47:20.016709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:17.149 13:47:20 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.408 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:17.408 "name": "raid_bdev1", 00:27:17.408 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:17.408 "strip_size_kb": 0, 00:27:17.408 "state": "online", 00:27:17.408 "raid_level": "raid1", 00:27:17.408 "superblock": true, 00:27:17.408 "num_base_bdevs": 2, 00:27:17.408 "num_base_bdevs_discovered": 2, 00:27:17.408 "num_base_bdevs_operational": 2, 00:27:17.408 "base_bdevs_list": [ 00:27:17.408 { 00:27:17.408 "name": "BaseBdev1", 00:27:17.408 "uuid": "a208e61c-2aa3-5df6-abb5-84f4f883eeda", 00:27:17.408 "is_configured": true, 00:27:17.408 "data_offset": 2048, 00:27:17.408 "data_size": 63488 00:27:17.408 }, 00:27:17.408 { 00:27:17.408 "name": "BaseBdev2", 00:27:17.408 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:17.408 "is_configured": true, 00:27:17.408 "data_offset": 2048, 00:27:17.408 "data_size": 63488 00:27:17.408 } 00:27:17.408 ] 00:27:17.408 }' 00:27:17.408 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:17.408 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:17.666 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:17.666 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:27:17.666 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.666 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:17.666 [2024-11-20 13:47:20.505520] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:17.666 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.666 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:27:17.666 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:17.666 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.666 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:17.666 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:17.666 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.925 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:27:17.925 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:27:17.925 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:17.926 [2024-11-20 13:47:20.605160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:17.926 "name": "raid_bdev1", 00:27:17.926 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:17.926 "strip_size_kb": 0, 00:27:17.926 "state": "online", 00:27:17.926 "raid_level": "raid1", 00:27:17.926 "superblock": true, 00:27:17.926 "num_base_bdevs": 2, 00:27:17.926 "num_base_bdevs_discovered": 1, 00:27:17.926 "num_base_bdevs_operational": 1, 00:27:17.926 "base_bdevs_list": [ 00:27:17.926 { 00:27:17.926 "name": null, 00:27:17.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:17.926 "is_configured": false, 00:27:17.926 "data_offset": 0, 00:27:17.926 "data_size": 63488 00:27:17.926 }, 00:27:17.926 { 00:27:17.926 "name": "BaseBdev2", 00:27:17.926 "uuid": 
"ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:17.926 "is_configured": true, 00:27:17.926 "data_offset": 2048, 00:27:17.926 "data_size": 63488 00:27:17.926 } 00:27:17.926 ] 00:27:17.926 }' 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:17.926 13:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:17.926 [2024-11-20 13:47:20.737312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:27:17.926 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:17.926 Zero copy mechanism will not be used. 00:27:17.926 Running I/O for 60 seconds... 00:27:18.209 13:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:18.209 13:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.209 13:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:18.209 [2024-11-20 13:47:21.123260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:18.468 13:47:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.468 13:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:27:18.468 [2024-11-20 13:47:21.212693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:27:18.468 [2024-11-20 13:47:21.215579] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:18.468 [2024-11-20 13:47:21.333542] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:18.468 [2024-11-20 13:47:21.334639] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:18.726 [2024-11-20 13:47:21.454998] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:18.726 [2024-11-20 13:47:21.455759] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:19.243 142.00 IOPS, 426.00 MiB/s [2024-11-20T13:47:22.160Z] [2024-11-20 13:47:21.904659] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:19.502 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:19.502 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:19.502 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:19.502 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:19.502 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:19.502 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:19.502 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.502 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.502 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:19.502 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.502 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:19.502 "name": "raid_bdev1", 00:27:19.502 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:19.502 "strip_size_kb": 0, 00:27:19.502 "state": "online", 00:27:19.502 "raid_level": "raid1", 00:27:19.502 "superblock": true, 00:27:19.502 "num_base_bdevs": 2, 
00:27:19.502 "num_base_bdevs_discovered": 2, 00:27:19.502 "num_base_bdevs_operational": 2, 00:27:19.502 "process": { 00:27:19.502 "type": "rebuild", 00:27:19.502 "target": "spare", 00:27:19.502 "progress": { 00:27:19.502 "blocks": 14336, 00:27:19.502 "percent": 22 00:27:19.502 } 00:27:19.502 }, 00:27:19.502 "base_bdevs_list": [ 00:27:19.502 { 00:27:19.502 "name": "spare", 00:27:19.502 "uuid": "48273152-f624-5ca7-a698-67359c5efcb3", 00:27:19.502 "is_configured": true, 00:27:19.502 "data_offset": 2048, 00:27:19.502 "data_size": 63488 00:27:19.502 }, 00:27:19.502 { 00:27:19.502 "name": "BaseBdev2", 00:27:19.502 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:19.502 "is_configured": true, 00:27:19.502 "data_offset": 2048, 00:27:19.502 "data_size": 63488 00:27:19.502 } 00:27:19.502 ] 00:27:19.502 }' 00:27:19.502 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:19.502 [2024-11-20 13:47:22.281100] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:19.502 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:19.502 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:19.502 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:19.502 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:19.502 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.502 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:19.502 [2024-11-20 13:47:22.363710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:19.762 [2024-11-20 13:47:22.517451] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:27:19.762 [2024-11-20 13:47:22.528999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:19.762 [2024-11-20 13:47:22.529336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:19.762 [2024-11-20 13:47:22.529414] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:19.762 [2024-11-20 13:47:22.582927] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:27:19.762 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.762 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:19.762 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:19.762 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:19.762 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:19.762 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:19.762 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:19.762 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:19.762 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:19.762 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:19.762 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:19.762 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:19.762 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:27:19.762 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.762 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:19.762 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.762 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:19.762 "name": "raid_bdev1", 00:27:19.762 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:19.762 "strip_size_kb": 0, 00:27:19.762 "state": "online", 00:27:19.762 "raid_level": "raid1", 00:27:19.762 "superblock": true, 00:27:19.762 "num_base_bdevs": 2, 00:27:19.762 "num_base_bdevs_discovered": 1, 00:27:19.762 "num_base_bdevs_operational": 1, 00:27:19.762 "base_bdevs_list": [ 00:27:19.762 { 00:27:19.762 "name": null, 00:27:19.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:19.762 "is_configured": false, 00:27:19.762 "data_offset": 0, 00:27:19.762 "data_size": 63488 00:27:19.762 }, 00:27:19.762 { 00:27:19.762 "name": "BaseBdev2", 00:27:19.762 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:19.762 "is_configured": true, 00:27:19.762 "data_offset": 2048, 00:27:19.762 "data_size": 63488 00:27:19.762 } 00:27:19.762 ] 00:27:19.762 }' 00:27:19.762 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:19.762 13:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:20.279 119.50 IOPS, 358.50 MiB/s [2024-11-20T13:47:23.196Z] 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:20.279 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:20.279 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:20.279 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:27:20.279 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:20.279 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:20.279 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.279 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:20.279 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.279 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.279 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:20.279 "name": "raid_bdev1", 00:27:20.279 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:20.279 "strip_size_kb": 0, 00:27:20.279 "state": "online", 00:27:20.279 "raid_level": "raid1", 00:27:20.279 "superblock": true, 00:27:20.279 "num_base_bdevs": 2, 00:27:20.279 "num_base_bdevs_discovered": 1, 00:27:20.279 "num_base_bdevs_operational": 1, 00:27:20.279 "base_bdevs_list": [ 00:27:20.279 { 00:27:20.279 "name": null, 00:27:20.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:20.279 "is_configured": false, 00:27:20.279 "data_offset": 0, 00:27:20.279 "data_size": 63488 00:27:20.279 }, 00:27:20.279 { 00:27:20.279 "name": "BaseBdev2", 00:27:20.279 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:20.279 "is_configured": true, 00:27:20.279 "data_offset": 2048, 00:27:20.279 "data_size": 63488 00:27:20.279 } 00:27:20.279 ] 00:27:20.279 }' 00:27:20.279 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:20.538 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:20.538 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:27:20.538 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:20.538 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:20.538 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.538 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:20.538 [2024-11-20 13:47:23.278641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:20.538 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.538 13:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:27:20.538 [2024-11-20 13:47:23.358640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:27:20.538 [2024-11-20 13:47:23.361299] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:20.797 [2024-11-20 13:47:23.490098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:20.797 [2024-11-20 13:47:23.490838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:21.055 [2024-11-20 13:47:23.729332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:21.055 [2024-11-20 13:47:23.729725] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:21.313 137.33 IOPS, 412.00 MiB/s [2024-11-20T13:47:24.230Z] [2024-11-20 13:47:24.072092] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:21.571 [2024-11-20 13:47:24.300433] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:21.571 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:21.571 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:21.571 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:21.571 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:21.571 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:21.571 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:21.571 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.571 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.571 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:21.571 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.571 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:21.571 "name": "raid_bdev1", 00:27:21.571 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:21.571 "strip_size_kb": 0, 00:27:21.571 "state": "online", 00:27:21.571 "raid_level": "raid1", 00:27:21.571 "superblock": true, 00:27:21.571 "num_base_bdevs": 2, 00:27:21.571 "num_base_bdevs_discovered": 2, 00:27:21.571 "num_base_bdevs_operational": 2, 00:27:21.571 "process": { 00:27:21.571 "type": "rebuild", 00:27:21.571 "target": "spare", 00:27:21.571 "progress": { 00:27:21.571 "blocks": 10240, 00:27:21.571 "percent": 16 00:27:21.571 } 00:27:21.571 }, 00:27:21.571 "base_bdevs_list": [ 00:27:21.571 { 00:27:21.571 "name": "spare", 
00:27:21.571 "uuid": "48273152-f624-5ca7-a698-67359c5efcb3", 00:27:21.571 "is_configured": true, 00:27:21.571 "data_offset": 2048, 00:27:21.572 "data_size": 63488 00:27:21.572 }, 00:27:21.572 { 00:27:21.572 "name": "BaseBdev2", 00:27:21.572 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:21.572 "is_configured": true, 00:27:21.572 "data_offset": 2048, 00:27:21.572 "data_size": 63488 00:27:21.572 } 00:27:21.572 ] 00:27:21.572 }' 00:27:21.572 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:21.572 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:21.572 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:27:21.841 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=455 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:21.841 "name": "raid_bdev1", 00:27:21.841 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:21.841 "strip_size_kb": 0, 00:27:21.841 "state": "online", 00:27:21.841 "raid_level": "raid1", 00:27:21.841 "superblock": true, 00:27:21.841 "num_base_bdevs": 2, 00:27:21.841 "num_base_bdevs_discovered": 2, 00:27:21.841 "num_base_bdevs_operational": 2, 00:27:21.841 "process": { 00:27:21.841 "type": "rebuild", 00:27:21.841 "target": "spare", 00:27:21.841 "progress": { 00:27:21.841 "blocks": 12288, 00:27:21.841 "percent": 19 00:27:21.841 } 00:27:21.841 }, 00:27:21.841 "base_bdevs_list": [ 00:27:21.841 { 00:27:21.841 "name": "spare", 00:27:21.841 "uuid": "48273152-f624-5ca7-a698-67359c5efcb3", 00:27:21.841 "is_configured": true, 00:27:21.841 "data_offset": 2048, 00:27:21.841 "data_size": 63488 00:27:21.841 }, 00:27:21.841 { 00:27:21.841 "name": "BaseBdev2", 00:27:21.841 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:21.841 "is_configured": true, 00:27:21.841 
"data_offset": 2048, 00:27:21.841 "data_size": 63488 00:27:21.841 } 00:27:21.841 ] 00:27:21.841 }' 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:21.841 [2024-11-20 13:47:24.662049] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:21.841 13:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:22.122 119.25 IOPS, 357.75 MiB/s [2024-11-20T13:47:25.039Z] [2024-11-20 13:47:25.027834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:27:22.690 [2024-11-20 13:47:25.396166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:27:22.949 [2024-11-20 13:47:25.626176] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:27:22.949 [2024-11-20 13:47:25.626874] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:27:22.949 13:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:22.949 13:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:22.949 13:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:22.949 13:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:27:22.949 13:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:22.949 13:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:22.949 13:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:22.949 13:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.949 13:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.949 13:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:22.949 13:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.949 13:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:22.949 "name": "raid_bdev1", 00:27:22.949 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:22.949 "strip_size_kb": 0, 00:27:22.949 "state": "online", 00:27:22.949 "raid_level": "raid1", 00:27:22.949 "superblock": true, 00:27:22.949 "num_base_bdevs": 2, 00:27:22.949 "num_base_bdevs_discovered": 2, 00:27:22.949 "num_base_bdevs_operational": 2, 00:27:22.949 "process": { 00:27:22.949 "type": "rebuild", 00:27:22.949 "target": "spare", 00:27:22.949 "progress": { 00:27:22.949 "blocks": 32768, 00:27:22.949 "percent": 51 00:27:22.949 } 00:27:22.949 }, 00:27:22.949 "base_bdevs_list": [ 00:27:22.949 { 00:27:22.949 "name": "spare", 00:27:22.949 "uuid": "48273152-f624-5ca7-a698-67359c5efcb3", 00:27:22.949 "is_configured": true, 00:27:22.949 "data_offset": 2048, 00:27:22.949 "data_size": 63488 00:27:22.949 }, 00:27:22.949 { 00:27:22.949 "name": "BaseBdev2", 00:27:22.949 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:22.949 "is_configured": true, 00:27:22.949 "data_offset": 2048, 00:27:22.949 "data_size": 63488 00:27:22.949 } 00:27:22.949 ] 00:27:22.949 }' 00:27:22.949 13:47:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:22.949 105.20 IOPS, 315.60 MiB/s [2024-11-20T13:47:25.866Z] 13:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:22.949 13:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:22.949 13:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:22.949 13:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:22.949 [2024-11-20 13:47:25.840652] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:27:23.517 [2024-11-20 13:47:26.180551] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:27:23.517 [2024-11-20 13:47:26.412686] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:27:23.775 [2024-11-20 13:47:26.634292] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:27:24.034 97.00 IOPS, 291.00 MiB/s [2024-11-20T13:47:26.951Z] 13:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:24.034 13:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:24.034 13:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:24.034 13:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:24.034 13:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:24.034 13:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:27:24.034 13:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:24.034 13:47:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.034 13:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.034 13:47:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:24.034 13:47:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.034 [2024-11-20 13:47:26.855817] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:27:24.034 13:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:24.034 "name": "raid_bdev1", 00:27:24.034 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:24.034 "strip_size_kb": 0, 00:27:24.034 "state": "online", 00:27:24.034 "raid_level": "raid1", 00:27:24.034 "superblock": true, 00:27:24.034 "num_base_bdevs": 2, 00:27:24.034 "num_base_bdevs_discovered": 2, 00:27:24.034 "num_base_bdevs_operational": 2, 00:27:24.034 "process": { 00:27:24.034 "type": "rebuild", 00:27:24.034 "target": "spare", 00:27:24.034 "progress": { 00:27:24.034 "blocks": 49152, 00:27:24.034 "percent": 77 00:27:24.034 } 00:27:24.034 }, 00:27:24.034 "base_bdevs_list": [ 00:27:24.034 { 00:27:24.034 "name": "spare", 00:27:24.034 "uuid": "48273152-f624-5ca7-a698-67359c5efcb3", 00:27:24.034 "is_configured": true, 00:27:24.034 "data_offset": 2048, 00:27:24.034 "data_size": 63488 00:27:24.034 }, 00:27:24.034 { 00:27:24.034 "name": "BaseBdev2", 00:27:24.034 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:24.034 "is_configured": true, 00:27:24.034 "data_offset": 2048, 00:27:24.034 "data_size": 63488 00:27:24.034 } 00:27:24.034 ] 00:27:24.034 }' 00:27:24.034 13:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:27:24.034 13:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:24.034 13:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:24.293 13:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:24.293 13:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:24.293 [2024-11-20 13:47:27.206948] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:27:24.551 [2024-11-20 13:47:27.317169] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:27:24.809 [2024-11-20 13:47:27.546105] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:24.809 [2024-11-20 13:47:27.654244] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:24.809 [2024-11-20 13:47:27.656953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:25.326 87.43 IOPS, 262.29 MiB/s [2024-11-20T13:47:28.243Z] 13:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:25.327 13:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:25.327 13:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:25.327 13:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:25.327 13:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:25.327 13:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:25.327 "name": "raid_bdev1", 00:27:25.327 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:25.327 "strip_size_kb": 0, 00:27:25.327 "state": "online", 00:27:25.327 "raid_level": "raid1", 00:27:25.327 "superblock": true, 00:27:25.327 "num_base_bdevs": 2, 00:27:25.327 "num_base_bdevs_discovered": 2, 00:27:25.327 "num_base_bdevs_operational": 2, 00:27:25.327 "base_bdevs_list": [ 00:27:25.327 { 00:27:25.327 "name": "spare", 00:27:25.327 "uuid": "48273152-f624-5ca7-a698-67359c5efcb3", 00:27:25.327 "is_configured": true, 00:27:25.327 "data_offset": 2048, 00:27:25.327 "data_size": 63488 00:27:25.327 }, 00:27:25.327 { 00:27:25.327 "name": "BaseBdev2", 00:27:25.327 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:25.327 "is_configured": true, 00:27:25.327 "data_offset": 2048, 00:27:25.327 "data_size": 63488 00:27:25.327 } 00:27:25.327 ] 00:27:25.327 }' 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@709 -- # break 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:25.327 "name": "raid_bdev1", 00:27:25.327 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:25.327 "strip_size_kb": 0, 00:27:25.327 "state": "online", 00:27:25.327 "raid_level": "raid1", 00:27:25.327 "superblock": true, 00:27:25.327 "num_base_bdevs": 2, 00:27:25.327 "num_base_bdevs_discovered": 2, 00:27:25.327 "num_base_bdevs_operational": 2, 00:27:25.327 "base_bdevs_list": [ 00:27:25.327 { 00:27:25.327 "name": "spare", 00:27:25.327 "uuid": "48273152-f624-5ca7-a698-67359c5efcb3", 00:27:25.327 "is_configured": true, 00:27:25.327 "data_offset": 2048, 00:27:25.327 "data_size": 63488 00:27:25.327 }, 00:27:25.327 { 00:27:25.327 "name": "BaseBdev2", 00:27:25.327 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 
00:27:25.327 "is_configured": true, 00:27:25.327 "data_offset": 2048, 00:27:25.327 "data_size": 63488 00:27:25.327 } 00:27:25.327 ] 00:27:25.327 }' 00:27:25.327 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:25.586 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:25.586 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:25.586 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:25.586 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:25.586 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:25.586 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:25.586 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:25.586 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:25.586 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:25.586 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:25.586 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:25.586 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:25.586 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:25.586 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.586 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.586 13:47:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.586 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:25.586 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.586 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:25.586 "name": "raid_bdev1", 00:27:25.586 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:25.586 "strip_size_kb": 0, 00:27:25.586 "state": "online", 00:27:25.586 "raid_level": "raid1", 00:27:25.586 "superblock": true, 00:27:25.586 "num_base_bdevs": 2, 00:27:25.586 "num_base_bdevs_discovered": 2, 00:27:25.586 "num_base_bdevs_operational": 2, 00:27:25.586 "base_bdevs_list": [ 00:27:25.586 { 00:27:25.586 "name": "spare", 00:27:25.586 "uuid": "48273152-f624-5ca7-a698-67359c5efcb3", 00:27:25.586 "is_configured": true, 00:27:25.586 "data_offset": 2048, 00:27:25.586 "data_size": 63488 00:27:25.586 }, 00:27:25.586 { 00:27:25.586 "name": "BaseBdev2", 00:27:25.586 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:25.586 "is_configured": true, 00:27:25.587 "data_offset": 2048, 00:27:25.587 "data_size": 63488 00:27:25.587 } 00:27:25.587 ] 00:27:25.587 }' 00:27:25.587 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:25.587 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:26.103 81.00 IOPS, 243.00 MiB/s [2024-11-20T13:47:29.020Z] 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:26.103 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.103 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:26.103 [2024-11-20 13:47:28.841968] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:27:26.103 [2024-11-20 13:47:28.842144] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:26.104 00:27:26.104 Latency(us) 00:27:26.104 [2024-11-20T13:47:29.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.104 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:27:26.104 raid_bdev1 : 8.16 80.04 240.13 0.00 0.00 16784.33 312.79 118679.74 00:27:26.104 [2024-11-20T13:47:29.021Z] =================================================================================================================== 00:27:26.104 [2024-11-20T13:47:29.021Z] Total : 80.04 240.13 0.00 0.00 16784.33 312.79 118679.74 00:27:26.104 { 00:27:26.104 "results": [ 00:27:26.104 { 00:27:26.104 "job": "raid_bdev1", 00:27:26.104 "core_mask": "0x1", 00:27:26.104 "workload": "randrw", 00:27:26.104 "percentage": 50, 00:27:26.104 "status": "finished", 00:27:26.104 "queue_depth": 2, 00:27:26.104 "io_size": 3145728, 00:27:26.104 "runtime": 8.15794, 00:27:26.104 "iops": 80.04471717124666, 00:27:26.104 "mibps": 240.13415151374, 00:27:26.104 "io_failed": 0, 00:27:26.104 "io_timeout": 0, 00:27:26.104 "avg_latency_us": 16784.33082556035, 00:27:26.104 "min_latency_us": 312.78545454545457, 00:27:26.104 "max_latency_us": 118679.73818181817 00:27:26.104 } 00:27:26.104 ], 00:27:26.104 "core_count": 1 00:27:26.104 } 00:27:26.104 [2024-11-20 13:47:28.918207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:26.104 [2024-11-20 13:47:28.918303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:26.104 [2024-11-20 13:47:28.918416] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:26.104 [2024-11-20 13:47:28.918438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:26.104 13:47:28 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.104 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:27:26.104 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.104 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.104 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:26.104 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.104 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:27:26.104 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:26.104 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:27:26.104 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:27:26.104 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:26.104 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:27:26.104 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:26.104 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:26.104 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:26.104 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:27:26.104 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:26.104 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:26.104 13:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:27:26.362 /dev/nbd0 00:27:26.619 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:26.620 1+0 records in 00:27:26.620 1+0 records out 00:27:26.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399968 s, 10.2 MB/s 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:26.620 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:27:26.880 /dev/nbd1 00:27:26.880 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:26.880 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:26.880 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:27:26.880 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:27:26.880 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:26.880 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:26.880 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:27:26.880 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:27:26.880 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:26.880 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:26.880 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:26.880 1+0 records in 00:27:26.880 1+0 records out 00:27:26.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442362 s, 9.3 MB/s 00:27:26.880 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:26.880 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:27:26.880 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:26.880 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:26.880 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:27:26.880 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:26.880 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:26.880 13:47:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:27.139 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:27:27.139 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:27.139 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:27:27.139 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:27.139 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:27:27.139 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:27.139 13:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:27:27.397 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:27.397 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:27.397 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:27.397 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:27.397 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:27.398 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:27.398 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:27:27.398 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:27.398 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:27:27.398 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:27:27.398 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:27.398 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:27.398 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:27:27.398 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:27.398 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:27.656 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:27.656 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:27.656 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:27.656 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:27.656 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:27.656 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:27.656 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:27:27.656 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:27.656 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:27:27.656 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:27:27.656 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.656 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:27.656 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.656 
13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:27.656 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.656 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:27.656 [2024-11-20 13:47:30.414612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:27.656 [2024-11-20 13:47:30.414684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:27.656 [2024-11-20 13:47:30.414732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:27:27.656 [2024-11-20 13:47:30.414749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:27.656 [2024-11-20 13:47:30.417750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:27.656 [2024-11-20 13:47:30.417798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:27.656 [2024-11-20 13:47:30.417934] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:27.656 [2024-11-20 13:47:30.417999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:27.656 [2024-11-20 13:47:30.418183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:27.656 spare 00:27:27.656 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.656 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:27:27.657 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.657 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:27.657 [2024-11-20 13:47:30.518325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007b00 00:27:27.657 [2024-11-20 13:47:30.518383] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:27.657 [2024-11-20 13:47:30.518774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:27:27.657 [2024-11-20 13:47:30.519065] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:27:27.657 [2024-11-20 13:47:30.519094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:27:27.657 [2024-11-20 13:47:30.519343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:27.657 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.657 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:27.657 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:27.657 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:27.657 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:27.657 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:27.657 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:27.657 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:27.657 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:27.657 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:27.657 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:27.657 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:27:27.657 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:27.657 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.657 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:27.657 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.916 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:27.916 "name": "raid_bdev1", 00:27:27.916 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:27.916 "strip_size_kb": 0, 00:27:27.916 "state": "online", 00:27:27.916 "raid_level": "raid1", 00:27:27.916 "superblock": true, 00:27:27.916 "num_base_bdevs": 2, 00:27:27.916 "num_base_bdevs_discovered": 2, 00:27:27.916 "num_base_bdevs_operational": 2, 00:27:27.916 "base_bdevs_list": [ 00:27:27.916 { 00:27:27.916 "name": "spare", 00:27:27.916 "uuid": "48273152-f624-5ca7-a698-67359c5efcb3", 00:27:27.916 "is_configured": true, 00:27:27.916 "data_offset": 2048, 00:27:27.916 "data_size": 63488 00:27:27.916 }, 00:27:27.916 { 00:27:27.916 "name": "BaseBdev2", 00:27:27.916 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:27.916 "is_configured": true, 00:27:27.916 "data_offset": 2048, 00:27:27.916 "data_size": 63488 00:27:27.916 } 00:27:27.916 ] 00:27:27.916 }' 00:27:27.916 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:27.916 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:28.175 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:28.175 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:28.175 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:27:28.175 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:28.175 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:28.175 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.175 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.175 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:28.175 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.175 13:47:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.175 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:28.175 "name": "raid_bdev1", 00:27:28.175 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:28.175 "strip_size_kb": 0, 00:27:28.175 "state": "online", 00:27:28.175 "raid_level": "raid1", 00:27:28.175 "superblock": true, 00:27:28.175 "num_base_bdevs": 2, 00:27:28.175 "num_base_bdevs_discovered": 2, 00:27:28.175 "num_base_bdevs_operational": 2, 00:27:28.175 "base_bdevs_list": [ 00:27:28.175 { 00:27:28.175 "name": "spare", 00:27:28.175 "uuid": "48273152-f624-5ca7-a698-67359c5efcb3", 00:27:28.175 "is_configured": true, 00:27:28.175 "data_offset": 2048, 00:27:28.175 "data_size": 63488 00:27:28.175 }, 00:27:28.175 { 00:27:28.175 "name": "BaseBdev2", 00:27:28.175 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:28.175 "is_configured": true, 00:27:28.175 "data_offset": 2048, 00:27:28.175 "data_size": 63488 00:27:28.175 } 00:27:28.175 ] 00:27:28.175 }' 00:27:28.175 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:28.175 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:27:28.175 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:28.434 [2024-11-20 13:47:31.187589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:28.434 "name": "raid_bdev1", 00:27:28.434 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:28.434 "strip_size_kb": 0, 00:27:28.434 "state": "online", 00:27:28.434 "raid_level": "raid1", 00:27:28.434 "superblock": true, 00:27:28.434 "num_base_bdevs": 2, 00:27:28.434 "num_base_bdevs_discovered": 1, 00:27:28.434 "num_base_bdevs_operational": 1, 00:27:28.434 "base_bdevs_list": [ 00:27:28.434 { 00:27:28.434 "name": null, 00:27:28.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.434 "is_configured": false, 00:27:28.434 "data_offset": 0, 00:27:28.434 "data_size": 63488 00:27:28.434 }, 00:27:28.434 { 00:27:28.434 "name": "BaseBdev2", 00:27:28.434 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:28.434 
"is_configured": true, 00:27:28.434 "data_offset": 2048, 00:27:28.434 "data_size": 63488 00:27:28.434 } 00:27:28.434 ] 00:27:28.434 }' 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:28.434 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:29.003 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:29.003 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.003 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:29.003 [2024-11-20 13:47:31.691839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:29.003 [2024-11-20 13:47:31.692111] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:29.003 [2024-11-20 13:47:31.692138] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:27:29.003 [2024-11-20 13:47:31.692190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:29.003 [2024-11-20 13:47:31.708143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:27:29.003 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.003 13:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:27:29.003 [2024-11-20 13:47:31.710695] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:29.943 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:29.943 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:29.943 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:29.943 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:29.943 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:29.943 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.943 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.943 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.943 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:29.943 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.943 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:29.943 "name": "raid_bdev1", 00:27:29.943 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:29.943 "strip_size_kb": 0, 00:27:29.943 "state": "online", 
00:27:29.943 "raid_level": "raid1", 00:27:29.943 "superblock": true, 00:27:29.943 "num_base_bdevs": 2, 00:27:29.943 "num_base_bdevs_discovered": 2, 00:27:29.943 "num_base_bdevs_operational": 2, 00:27:29.943 "process": { 00:27:29.943 "type": "rebuild", 00:27:29.943 "target": "spare", 00:27:29.943 "progress": { 00:27:29.943 "blocks": 20480, 00:27:29.943 "percent": 32 00:27:29.943 } 00:27:29.943 }, 00:27:29.943 "base_bdevs_list": [ 00:27:29.943 { 00:27:29.943 "name": "spare", 00:27:29.943 "uuid": "48273152-f624-5ca7-a698-67359c5efcb3", 00:27:29.943 "is_configured": true, 00:27:29.943 "data_offset": 2048, 00:27:29.943 "data_size": 63488 00:27:29.943 }, 00:27:29.943 { 00:27:29.943 "name": "BaseBdev2", 00:27:29.943 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:29.943 "is_configured": true, 00:27:29.943 "data_offset": 2048, 00:27:29.943 "data_size": 63488 00:27:29.943 } 00:27:29.943 ] 00:27:29.943 }' 00:27:29.943 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:29.943 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:29.943 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:30.202 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:30.202 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:27:30.202 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.202 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:30.202 [2024-11-20 13:47:32.884460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:30.202 [2024-11-20 13:47:32.920377] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:30.202 [2024-11-20 
13:47:32.920483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:30.202 [2024-11-20 13:47:32.920509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:30.202 [2024-11-20 13:47:32.920524] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:30.202 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.202 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:30.202 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:30.202 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:30.202 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:30.202 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:30.202 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:30.202 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:30.202 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:30.202 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:30.202 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:30.203 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:30.203 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.203 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:30.203 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:27:30.203 13:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.203 13:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:30.203 "name": "raid_bdev1", 00:27:30.203 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:30.203 "strip_size_kb": 0, 00:27:30.203 "state": "online", 00:27:30.203 "raid_level": "raid1", 00:27:30.203 "superblock": true, 00:27:30.203 "num_base_bdevs": 2, 00:27:30.203 "num_base_bdevs_discovered": 1, 00:27:30.203 "num_base_bdevs_operational": 1, 00:27:30.203 "base_bdevs_list": [ 00:27:30.203 { 00:27:30.203 "name": null, 00:27:30.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:30.203 "is_configured": false, 00:27:30.203 "data_offset": 0, 00:27:30.203 "data_size": 63488 00:27:30.203 }, 00:27:30.203 { 00:27:30.203 "name": "BaseBdev2", 00:27:30.203 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:30.203 "is_configured": true, 00:27:30.203 "data_offset": 2048, 00:27:30.203 "data_size": 63488 00:27:30.203 } 00:27:30.203 ] 00:27:30.203 }' 00:27:30.203 13:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:30.203 13:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:30.770 13:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:30.770 13:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.770 13:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:30.770 [2024-11-20 13:47:33.476262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:30.770 [2024-11-20 13:47:33.476458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:30.770 [2024-11-20 13:47:33.476524] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:27:30.770 [2024-11-20 13:47:33.476578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:30.770 [2024-11-20 13:47:33.477524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:30.770 [2024-11-20 13:47:33.477578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:30.770 [2024-11-20 13:47:33.477735] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:30.770 [2024-11-20 13:47:33.477763] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:30.770 [2024-11-20 13:47:33.477779] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:30.770 [2024-11-20 13:47:33.477815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:30.770 [2024-11-20 13:47:33.499199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:27:30.770 spare 00:27:30.770 13:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.770 13:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:27:30.770 [2024-11-20 13:47:33.503082] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:31.820 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:31.820 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:31.820 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:31.820 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:31.820 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:31.820 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:31.820 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.820 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:31.820 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.820 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.820 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:31.820 "name": "raid_bdev1", 00:27:31.820 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:31.820 "strip_size_kb": 0, 00:27:31.820 "state": "online", 00:27:31.820 "raid_level": "raid1", 00:27:31.820 "superblock": true, 00:27:31.820 "num_base_bdevs": 2, 00:27:31.820 "num_base_bdevs_discovered": 2, 00:27:31.820 "num_base_bdevs_operational": 2, 00:27:31.820 "process": { 00:27:31.820 "type": "rebuild", 00:27:31.820 "target": "spare", 00:27:31.820 "progress": { 00:27:31.820 "blocks": 18432, 00:27:31.820 "percent": 29 00:27:31.820 } 00:27:31.820 }, 00:27:31.820 "base_bdevs_list": [ 00:27:31.820 { 00:27:31.820 "name": "spare", 00:27:31.820 "uuid": "48273152-f624-5ca7-a698-67359c5efcb3", 00:27:31.820 "is_configured": true, 00:27:31.820 "data_offset": 2048, 00:27:31.820 "data_size": 63488 00:27:31.820 }, 00:27:31.820 { 00:27:31.820 "name": "BaseBdev2", 00:27:31.820 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:31.820 "is_configured": true, 00:27:31.820 "data_offset": 2048, 00:27:31.820 "data_size": 63488 00:27:31.820 } 00:27:31.820 ] 00:27:31.820 }' 00:27:31.820 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:31.820 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:27:31.820 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:31.820 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:31.820 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:27:31.820 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.820 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:31.820 [2024-11-20 13:47:34.661461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:31.820 [2024-11-20 13:47:34.717359] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:31.820 [2024-11-20 13:47:34.717493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:31.820 [2024-11-20 13:47:34.717525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:31.820 [2024-11-20 13:47:34.717538] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:32.078 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.078 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:32.078 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:32.078 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:32.078 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:32.078 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:32.078 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:27:32.078 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:32.079 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:32.079 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:32.079 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:32.079 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:32.079 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.079 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.079 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:32.079 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.079 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:32.079 "name": "raid_bdev1", 00:27:32.079 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:32.079 "strip_size_kb": 0, 00:27:32.079 "state": "online", 00:27:32.079 "raid_level": "raid1", 00:27:32.079 "superblock": true, 00:27:32.079 "num_base_bdevs": 2, 00:27:32.079 "num_base_bdevs_discovered": 1, 00:27:32.079 "num_base_bdevs_operational": 1, 00:27:32.079 "base_bdevs_list": [ 00:27:32.079 { 00:27:32.079 "name": null, 00:27:32.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:32.079 "is_configured": false, 00:27:32.079 "data_offset": 0, 00:27:32.079 "data_size": 63488 00:27:32.079 }, 00:27:32.079 { 00:27:32.079 "name": "BaseBdev2", 00:27:32.079 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:32.079 "is_configured": true, 00:27:32.079 "data_offset": 2048, 00:27:32.079 "data_size": 63488 00:27:32.079 } 00:27:32.079 ] 00:27:32.079 }' 
00:27:32.079 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:32.079 13:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:32.645 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:32.645 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:32.645 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:32.645 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:32.645 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:32.645 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:32.645 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.646 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.646 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:32.646 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.646 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:32.646 "name": "raid_bdev1", 00:27:32.646 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:32.646 "strip_size_kb": 0, 00:27:32.646 "state": "online", 00:27:32.646 "raid_level": "raid1", 00:27:32.646 "superblock": true, 00:27:32.646 "num_base_bdevs": 2, 00:27:32.646 "num_base_bdevs_discovered": 1, 00:27:32.646 "num_base_bdevs_operational": 1, 00:27:32.646 "base_bdevs_list": [ 00:27:32.646 { 00:27:32.646 "name": null, 00:27:32.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:32.646 "is_configured": false, 00:27:32.646 "data_offset": 0, 
00:27:32.646 "data_size": 63488 00:27:32.646 }, 00:27:32.646 { 00:27:32.646 "name": "BaseBdev2", 00:27:32.646 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:32.646 "is_configured": true, 00:27:32.646 "data_offset": 2048, 00:27:32.646 "data_size": 63488 00:27:32.646 } 00:27:32.646 ] 00:27:32.646 }' 00:27:32.646 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:32.646 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:32.646 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:32.646 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:32.646 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:27:32.646 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.646 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:32.646 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.646 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:32.646 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.646 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:32.646 [2024-11-20 13:47:35.464020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:32.646 [2024-11-20 13:47:35.464141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:32.646 [2024-11-20 13:47:35.464199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:27:32.646 [2024-11-20 13:47:35.464222] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:32.646 [2024-11-20 13:47:35.465042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:32.646 [2024-11-20 13:47:35.465089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:32.646 [2024-11-20 13:47:35.465246] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:32.646 [2024-11-20 13:47:35.465275] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:32.646 [2024-11-20 13:47:35.465294] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:32.646 [2024-11-20 13:47:35.465313] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:27:32.646 BaseBdev1 00:27:32.646 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.646 13:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:27:33.580 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:33.580 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:33.580 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:33.580 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:33.580 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:33.580 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:33.580 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:33.580 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:33.580 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:33.580 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:33.580 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:33.580 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.580 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:33.580 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.580 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.838 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:33.838 "name": "raid_bdev1", 00:27:33.838 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:33.838 "strip_size_kb": 0, 00:27:33.838 "state": "online", 00:27:33.838 "raid_level": "raid1", 00:27:33.838 "superblock": true, 00:27:33.838 "num_base_bdevs": 2, 00:27:33.838 "num_base_bdevs_discovered": 1, 00:27:33.838 "num_base_bdevs_operational": 1, 00:27:33.838 "base_bdevs_list": [ 00:27:33.838 { 00:27:33.838 "name": null, 00:27:33.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:33.838 "is_configured": false, 00:27:33.838 "data_offset": 0, 00:27:33.838 "data_size": 63488 00:27:33.838 }, 00:27:33.838 { 00:27:33.838 "name": "BaseBdev2", 00:27:33.838 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:33.838 "is_configured": true, 00:27:33.838 "data_offset": 2048, 00:27:33.838 "data_size": 63488 00:27:33.838 } 00:27:33.838 ] 00:27:33.838 }' 00:27:33.838 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:33.838 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:27:34.096 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:34.096 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:34.096 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:34.096 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:34.096 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:34.096 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:34.096 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.096 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.096 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:34.096 13:47:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:34.355 "name": "raid_bdev1", 00:27:34.355 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:34.355 "strip_size_kb": 0, 00:27:34.355 "state": "online", 00:27:34.355 "raid_level": "raid1", 00:27:34.355 "superblock": true, 00:27:34.355 "num_base_bdevs": 2, 00:27:34.355 "num_base_bdevs_discovered": 1, 00:27:34.355 "num_base_bdevs_operational": 1, 00:27:34.355 "base_bdevs_list": [ 00:27:34.355 { 00:27:34.355 "name": null, 00:27:34.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:34.355 "is_configured": false, 00:27:34.355 "data_offset": 0, 00:27:34.355 "data_size": 63488 00:27:34.355 }, 00:27:34.355 { 00:27:34.355 "name": "BaseBdev2", 00:27:34.355 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:34.355 "is_configured": true, 
00:27:34.355 "data_offset": 2048, 00:27:34.355 "data_size": 63488 00:27:34.355 } 00:27:34.355 ] 00:27:34.355 }' 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:34.355 [2024-11-20 13:47:37.141165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:34.355 [2024-11-20 13:47:37.141388] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:34.355 [2024-11-20 13:47:37.141415] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:34.355 request: 00:27:34.355 { 00:27:34.355 "base_bdev": "BaseBdev1", 00:27:34.355 "raid_bdev": "raid_bdev1", 00:27:34.355 "method": "bdev_raid_add_base_bdev", 00:27:34.355 "req_id": 1 00:27:34.355 } 00:27:34.355 Got JSON-RPC error response 00:27:34.355 response: 00:27:34.355 { 00:27:34.355 "code": -22, 00:27:34.355 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:34.355 } 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:34.355 13:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:27:35.300 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:35.300 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:35.300 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:35.300 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:35.300 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:35.300 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:27:35.300 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:35.300 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:35.300 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:35.300 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:35.300 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:35.300 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.300 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.300 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:35.300 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.571 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:35.571 "name": "raid_bdev1", 00:27:35.571 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:35.571 "strip_size_kb": 0, 00:27:35.571 "state": "online", 00:27:35.571 "raid_level": "raid1", 00:27:35.571 "superblock": true, 00:27:35.571 "num_base_bdevs": 2, 00:27:35.571 "num_base_bdevs_discovered": 1, 00:27:35.571 "num_base_bdevs_operational": 1, 00:27:35.571 "base_bdevs_list": [ 00:27:35.571 { 00:27:35.571 "name": null, 00:27:35.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:35.571 "is_configured": false, 00:27:35.571 "data_offset": 0, 00:27:35.571 "data_size": 63488 00:27:35.571 }, 00:27:35.571 { 00:27:35.571 "name": "BaseBdev2", 00:27:35.571 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:35.571 "is_configured": true, 00:27:35.571 "data_offset": 2048, 00:27:35.571 "data_size": 63488 00:27:35.571 } 00:27:35.571 ] 00:27:35.571 }' 
00:27:35.571 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:35.571 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:35.830 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:35.830 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:35.830 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:35.830 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:35.830 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:35.830 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:35.830 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.830 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:35.830 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.830 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.830 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:35.830 "name": "raid_bdev1", 00:27:35.830 "uuid": "2b5cd516-41d2-4d26-b615-7ea2c1372687", 00:27:35.830 "strip_size_kb": 0, 00:27:35.830 "state": "online", 00:27:35.830 "raid_level": "raid1", 00:27:35.830 "superblock": true, 00:27:35.830 "num_base_bdevs": 2, 00:27:35.830 "num_base_bdevs_discovered": 1, 00:27:35.830 "num_base_bdevs_operational": 1, 00:27:35.830 "base_bdevs_list": [ 00:27:35.830 { 00:27:35.830 "name": null, 00:27:35.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:35.830 "is_configured": false, 00:27:35.830 "data_offset": 0, 
00:27:35.830 "data_size": 63488 00:27:35.830 }, 00:27:35.830 { 00:27:35.830 "name": "BaseBdev2", 00:27:35.830 "uuid": "ee36b414-679d-5350-bf85-ec2f68da8120", 00:27:35.830 "is_configured": true, 00:27:35.830 "data_offset": 2048, 00:27:35.830 "data_size": 63488 00:27:35.830 } 00:27:35.830 ] 00:27:35.830 }' 00:27:35.830 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:36.088 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:36.089 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:36.089 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:36.089 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77297 00:27:36.089 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77297 ']' 00:27:36.089 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77297 00:27:36.089 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:27:36.089 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.089 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77297 00:27:36.089 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:36.089 killing process with pid 77297 00:27:36.089 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:36.089 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77297' 00:27:36.089 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77297 00:27:36.089 Received shutdown signal, test time was 
about 18.111587 seconds 00:27:36.089 00:27:36.089 Latency(us) 00:27:36.089 [2024-11-20T13:47:39.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.089 [2024-11-20T13:47:39.006Z] =================================================================================================================== 00:27:36.089 [2024-11-20T13:47:39.006Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:36.089 13:47:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77297 00:27:36.089 [2024-11-20 13:47:38.851506] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:36.089 [2024-11-20 13:47:38.851696] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:36.089 [2024-11-20 13:47:38.851773] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:36.089 [2024-11-20 13:47:38.851807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:27:36.347 [2024-11-20 13:47:39.068423] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:37.281 13:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:27:37.281 00:27:37.281 real 0m21.452s 00:27:37.281 user 0m29.025s 00:27:37.281 sys 0m2.010s 00:27:37.281 13:47:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:37.281 ************************************ 00:27:37.281 END TEST raid_rebuild_test_sb_io 00:27:37.281 13:47:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:37.281 ************************************ 00:27:37.539 13:47:40 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:27:37.539 13:47:40 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:27:37.539 13:47:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:27:37.539 
13:47:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:37.539 13:47:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:37.539 ************************************ 00:27:37.539 START TEST raid_rebuild_test 00:27:37.539 ************************************ 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77998 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77998 00:27:37.539 13:47:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:37.540 13:47:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77998 ']' 00:27:37.540 13:47:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.540 13:47:40 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:37.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.540 13:47:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.540 13:47:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:37.540 13:47:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:37.540 [2024-11-20 13:47:40.391117] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:27:37.540 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:37.540 Zero copy mechanism will not be used. 00:27:37.540 [2024-11-20 13:47:40.391410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77998 ] 00:27:37.797 [2024-11-20 13:47:40.581767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.053 [2024-11-20 13:47:40.713656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.053 [2024-11-20 13:47:40.916548] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:38.053 [2024-11-20 13:47:40.916647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.619 BaseBdev1_malloc 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.619 [2024-11-20 13:47:41.444684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:38.619 [2024-11-20 13:47:41.444804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:38.619 [2024-11-20 13:47:41.444837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:38.619 [2024-11-20 13:47:41.444855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:38.619 [2024-11-20 13:47:41.447782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:38.619 [2024-11-20 13:47:41.447834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:38.619 BaseBdev1 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:27:38.619 BaseBdev2_malloc 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.619 [2024-11-20 13:47:41.497696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:38.619 [2024-11-20 13:47:41.497780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:38.619 [2024-11-20 13:47:41.497816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:38.619 [2024-11-20 13:47:41.497833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:38.619 [2024-11-20 13:47:41.500689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:38.619 [2024-11-20 13:47:41.500739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:38.619 BaseBdev2 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.619 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.878 BaseBdev3_malloc 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.878 [2024-11-20 13:47:41.565012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:38.878 [2024-11-20 13:47:41.565092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:38.878 [2024-11-20 13:47:41.565125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:38.878 [2024-11-20 13:47:41.565143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:38.878 [2024-11-20 13:47:41.567998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:38.878 [2024-11-20 13:47:41.568050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:38.878 BaseBdev3 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.878 BaseBdev4_malloc 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:27:38.878 [2024-11-20 13:47:41.617247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:27:38.878 [2024-11-20 13:47:41.617331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:38.878 [2024-11-20 13:47:41.617363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:27:38.878 [2024-11-20 13:47:41.617380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:38.878 [2024-11-20 13:47:41.620126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:38.878 [2024-11-20 13:47:41.620182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:38.878 BaseBdev4 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.878 spare_malloc 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.878 spare_delay 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:27:38.878 
13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.878 [2024-11-20 13:47:41.677632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:38.878 [2024-11-20 13:47:41.677699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:38.878 [2024-11-20 13:47:41.677727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:27:38.878 [2024-11-20 13:47:41.677745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:38.878 [2024-11-20 13:47:41.680507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:38.878 [2024-11-20 13:47:41.680558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:38.878 spare 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.878 [2024-11-20 13:47:41.685687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:38.878 [2024-11-20 13:47:41.688072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:38.878 [2024-11-20 13:47:41.688167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:38.878 [2024-11-20 13:47:41.688252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:38.878 [2024-11-20 13:47:41.688372] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:27:38.878 [2024-11-20 13:47:41.688401] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:38.878 [2024-11-20 13:47:41.688741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:38.878 [2024-11-20 13:47:41.688994] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:38.878 [2024-11-20 13:47:41.689023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:38.878 [2024-11-20 13:47:41.689217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:38.878 13:47:41 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.878 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.879 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:38.879 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.879 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:38.879 "name": "raid_bdev1", 00:27:38.879 "uuid": "1de4f12a-7cd0-42e3-83af-58b415ea466e", 00:27:38.879 "strip_size_kb": 0, 00:27:38.879 "state": "online", 00:27:38.879 "raid_level": "raid1", 00:27:38.879 "superblock": false, 00:27:38.879 "num_base_bdevs": 4, 00:27:38.879 "num_base_bdevs_discovered": 4, 00:27:38.879 "num_base_bdevs_operational": 4, 00:27:38.879 "base_bdevs_list": [ 00:27:38.879 { 00:27:38.879 "name": "BaseBdev1", 00:27:38.879 "uuid": "0479cd60-df59-54f8-82b1-1f83908bdd2f", 00:27:38.879 "is_configured": true, 00:27:38.879 "data_offset": 0, 00:27:38.879 "data_size": 65536 00:27:38.879 }, 00:27:38.879 { 00:27:38.879 "name": "BaseBdev2", 00:27:38.879 "uuid": "10a0173a-f27e-570d-8dba-9eea60d8942b", 00:27:38.879 "is_configured": true, 00:27:38.879 "data_offset": 0, 00:27:38.879 "data_size": 65536 00:27:38.879 }, 00:27:38.879 { 00:27:38.879 "name": "BaseBdev3", 00:27:38.879 "uuid": "6d24ad34-9a1e-5ee1-936d-c46b61a7e60f", 00:27:38.879 "is_configured": true, 00:27:38.879 "data_offset": 0, 00:27:38.879 "data_size": 65536 00:27:38.879 }, 00:27:38.879 { 00:27:38.879 "name": "BaseBdev4", 00:27:38.879 "uuid": "135df408-f536-50ed-864f-1ae8169019fe", 00:27:38.879 "is_configured": true, 00:27:38.879 "data_offset": 0, 00:27:38.879 "data_size": 65536 00:27:38.879 } 00:27:38.879 ] 00:27:38.879 }' 00:27:38.879 13:47:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:38.879 13:47:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:27:39.444 13:47:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:27:39.444 13:47:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:39.444 13:47:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.444 13:47:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.444 [2024-11-20 13:47:42.138255] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:39.444 13:47:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.444 13:47:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:27:39.444 13:47:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:39.444 13:47:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.444 13:47:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:39.444 13:47:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:39.444 13:47:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.444 13:47:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:27:39.444 13:47:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:27:39.444 13:47:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:27:39.444 13:47:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:27:39.444 13:47:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:27:39.444 13:47:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:39.444 13:47:42 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:39.445 13:47:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:39.445 13:47:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:39.445 13:47:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:39.445 13:47:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:27:39.445 13:47:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:39.445 13:47:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:39.445 13:47:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:39.702 [2024-11-20 13:47:42.457992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:27:39.702 /dev/nbd0 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:39.702 13:47:42 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:39.702 1+0 records in 00:27:39.702 1+0 records out 00:27:39.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356593 s, 11.5 MB/s 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:27:39.702 13:47:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:27:49.675 65536+0 records in 00:27:49.675 65536+0 records out 00:27:49.675 33554432 bytes (34 MB, 32 MiB) copied, 8.72007 s, 3.8 MB/s 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:49.675 
13:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:49.675 [2024-11-20 13:47:51.510539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.675 [2024-11-20 13:47:51.522665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:49.675 "name": "raid_bdev1", 00:27:49.675 "uuid": "1de4f12a-7cd0-42e3-83af-58b415ea466e", 00:27:49.675 "strip_size_kb": 0, 00:27:49.675 "state": "online", 00:27:49.675 "raid_level": "raid1", 00:27:49.675 "superblock": false, 00:27:49.675 "num_base_bdevs": 4, 00:27:49.675 "num_base_bdevs_discovered": 3, 00:27:49.675 "num_base_bdevs_operational": 3, 00:27:49.675 "base_bdevs_list": [ 00:27:49.675 { 00:27:49.675 "name": null, 00:27:49.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:49.675 
"is_configured": false, 00:27:49.675 "data_offset": 0, 00:27:49.675 "data_size": 65536 00:27:49.675 }, 00:27:49.675 { 00:27:49.675 "name": "BaseBdev2", 00:27:49.675 "uuid": "10a0173a-f27e-570d-8dba-9eea60d8942b", 00:27:49.675 "is_configured": true, 00:27:49.675 "data_offset": 0, 00:27:49.675 "data_size": 65536 00:27:49.675 }, 00:27:49.675 { 00:27:49.675 "name": "BaseBdev3", 00:27:49.675 "uuid": "6d24ad34-9a1e-5ee1-936d-c46b61a7e60f", 00:27:49.675 "is_configured": true, 00:27:49.675 "data_offset": 0, 00:27:49.675 "data_size": 65536 00:27:49.675 }, 00:27:49.675 { 00:27:49.675 "name": "BaseBdev4", 00:27:49.675 "uuid": "135df408-f536-50ed-864f-1ae8169019fe", 00:27:49.675 "is_configured": true, 00:27:49.675 "data_offset": 0, 00:27:49.675 "data_size": 65536 00:27:49.675 } 00:27:49.675 ] 00:27:49.675 }' 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:49.675 13:47:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.675 13:47:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:49.675 13:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.675 13:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.675 [2024-11-20 13:47:52.034792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:49.675 [2024-11-20 13:47:52.049329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:27:49.675 13:47:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.675 13:47:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:27:49.676 [2024-11-20 13:47:52.052104] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:50.244 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:50.244 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:50.244 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:50.244 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:50.244 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:50.244 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:50.244 13:47:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.244 13:47:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.244 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.244 13:47:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.244 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:50.244 "name": "raid_bdev1", 00:27:50.244 "uuid": "1de4f12a-7cd0-42e3-83af-58b415ea466e", 00:27:50.244 "strip_size_kb": 0, 00:27:50.244 "state": "online", 00:27:50.244 "raid_level": "raid1", 00:27:50.244 "superblock": false, 00:27:50.244 "num_base_bdevs": 4, 00:27:50.244 "num_base_bdevs_discovered": 4, 00:27:50.244 "num_base_bdevs_operational": 4, 00:27:50.244 "process": { 00:27:50.244 "type": "rebuild", 00:27:50.244 "target": "spare", 00:27:50.244 "progress": { 00:27:50.244 "blocks": 20480, 00:27:50.244 "percent": 31 00:27:50.244 } 00:27:50.244 }, 00:27:50.244 "base_bdevs_list": [ 00:27:50.244 { 00:27:50.244 "name": "spare", 00:27:50.244 "uuid": "47ffce25-7b94-5a6f-a706-acf804ef3325", 00:27:50.244 "is_configured": true, 00:27:50.244 "data_offset": 0, 00:27:50.244 "data_size": 65536 00:27:50.244 }, 00:27:50.244 { 00:27:50.244 "name": "BaseBdev2", 00:27:50.244 "uuid": 
"10a0173a-f27e-570d-8dba-9eea60d8942b", 00:27:50.244 "is_configured": true, 00:27:50.244 "data_offset": 0, 00:27:50.244 "data_size": 65536 00:27:50.244 }, 00:27:50.244 { 00:27:50.244 "name": "BaseBdev3", 00:27:50.244 "uuid": "6d24ad34-9a1e-5ee1-936d-c46b61a7e60f", 00:27:50.244 "is_configured": true, 00:27:50.244 "data_offset": 0, 00:27:50.244 "data_size": 65536 00:27:50.244 }, 00:27:50.244 { 00:27:50.244 "name": "BaseBdev4", 00:27:50.244 "uuid": "135df408-f536-50ed-864f-1ae8169019fe", 00:27:50.244 "is_configured": true, 00:27:50.244 "data_offset": 0, 00:27:50.244 "data_size": 65536 00:27:50.244 } 00:27:50.244 ] 00:27:50.244 }' 00:27:50.244 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:50.244 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:50.244 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.503 [2024-11-20 13:47:53.201490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:50.503 [2024-11-20 13:47:53.261514] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:50.503 [2024-11-20 13:47:53.261609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:50.503 [2024-11-20 13:47:53.261636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:50.503 [2024-11-20 13:47:53.261659] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.503 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:50.503 "name": "raid_bdev1", 00:27:50.503 "uuid": "1de4f12a-7cd0-42e3-83af-58b415ea466e", 00:27:50.503 "strip_size_kb": 0, 00:27:50.503 "state": "online", 
00:27:50.503 "raid_level": "raid1", 00:27:50.503 "superblock": false, 00:27:50.503 "num_base_bdevs": 4, 00:27:50.503 "num_base_bdevs_discovered": 3, 00:27:50.503 "num_base_bdevs_operational": 3, 00:27:50.503 "base_bdevs_list": [ 00:27:50.503 { 00:27:50.503 "name": null, 00:27:50.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.503 "is_configured": false, 00:27:50.503 "data_offset": 0, 00:27:50.503 "data_size": 65536 00:27:50.503 }, 00:27:50.503 { 00:27:50.503 "name": "BaseBdev2", 00:27:50.503 "uuid": "10a0173a-f27e-570d-8dba-9eea60d8942b", 00:27:50.503 "is_configured": true, 00:27:50.503 "data_offset": 0, 00:27:50.503 "data_size": 65536 00:27:50.503 }, 00:27:50.503 { 00:27:50.503 "name": "BaseBdev3", 00:27:50.503 "uuid": "6d24ad34-9a1e-5ee1-936d-c46b61a7e60f", 00:27:50.503 "is_configured": true, 00:27:50.503 "data_offset": 0, 00:27:50.504 "data_size": 65536 00:27:50.504 }, 00:27:50.504 { 00:27:50.504 "name": "BaseBdev4", 00:27:50.504 "uuid": "135df408-f536-50ed-864f-1ae8169019fe", 00:27:50.504 "is_configured": true, 00:27:50.504 "data_offset": 0, 00:27:50.504 "data_size": 65536 00:27:50.504 } 00:27:50.504 ] 00:27:50.504 }' 00:27:50.504 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:50.504 13:47:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.070 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:51.070 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:51.070 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:51.070 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:51.071 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:51.071 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:27:51.071 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:51.071 13:47:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.071 13:47:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.071 13:47:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.071 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:51.071 "name": "raid_bdev1", 00:27:51.071 "uuid": "1de4f12a-7cd0-42e3-83af-58b415ea466e", 00:27:51.071 "strip_size_kb": 0, 00:27:51.071 "state": "online", 00:27:51.071 "raid_level": "raid1", 00:27:51.071 "superblock": false, 00:27:51.071 "num_base_bdevs": 4, 00:27:51.071 "num_base_bdevs_discovered": 3, 00:27:51.071 "num_base_bdevs_operational": 3, 00:27:51.071 "base_bdevs_list": [ 00:27:51.071 { 00:27:51.071 "name": null, 00:27:51.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:51.071 "is_configured": false, 00:27:51.071 "data_offset": 0, 00:27:51.071 "data_size": 65536 00:27:51.071 }, 00:27:51.071 { 00:27:51.071 "name": "BaseBdev2", 00:27:51.071 "uuid": "10a0173a-f27e-570d-8dba-9eea60d8942b", 00:27:51.071 "is_configured": true, 00:27:51.071 "data_offset": 0, 00:27:51.071 "data_size": 65536 00:27:51.071 }, 00:27:51.071 { 00:27:51.071 "name": "BaseBdev3", 00:27:51.071 "uuid": "6d24ad34-9a1e-5ee1-936d-c46b61a7e60f", 00:27:51.071 "is_configured": true, 00:27:51.071 "data_offset": 0, 00:27:51.071 "data_size": 65536 00:27:51.071 }, 00:27:51.071 { 00:27:51.071 "name": "BaseBdev4", 00:27:51.071 "uuid": "135df408-f536-50ed-864f-1ae8169019fe", 00:27:51.071 "is_configured": true, 00:27:51.071 "data_offset": 0, 00:27:51.071 "data_size": 65536 00:27:51.071 } 00:27:51.071 ] 00:27:51.071 }' 00:27:51.071 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:51.071 13:47:53 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:51.071 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:51.329 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:27:51.329 13:47:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:27:51.329 13:47:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.329 13:47:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.329 [2024-11-20 13:47:53.993631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:51.329 [2024-11-20 13:47:54.007189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:27:51.329 13:47:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.329 13:47:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:27:51.329 [2024-11-20 13:47:54.009824] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:52.263 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:52.263 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:52.263 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:52.263 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:52.264 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:52.264 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:52.264 13:47:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.264 13:47:55 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:27:52.264 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.264 13:47:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.264 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:52.264 "name": "raid_bdev1", 00:27:52.264 "uuid": "1de4f12a-7cd0-42e3-83af-58b415ea466e", 00:27:52.264 "strip_size_kb": 0, 00:27:52.264 "state": "online", 00:27:52.264 "raid_level": "raid1", 00:27:52.264 "superblock": false, 00:27:52.264 "num_base_bdevs": 4, 00:27:52.264 "num_base_bdevs_discovered": 4, 00:27:52.264 "num_base_bdevs_operational": 4, 00:27:52.264 "process": { 00:27:52.264 "type": "rebuild", 00:27:52.264 "target": "spare", 00:27:52.264 "progress": { 00:27:52.264 "blocks": 20480, 00:27:52.264 "percent": 31 00:27:52.264 } 00:27:52.264 }, 00:27:52.264 "base_bdevs_list": [ 00:27:52.264 { 00:27:52.264 "name": "spare", 00:27:52.264 "uuid": "47ffce25-7b94-5a6f-a706-acf804ef3325", 00:27:52.264 "is_configured": true, 00:27:52.264 "data_offset": 0, 00:27:52.264 "data_size": 65536 00:27:52.264 }, 00:27:52.264 { 00:27:52.264 "name": "BaseBdev2", 00:27:52.264 "uuid": "10a0173a-f27e-570d-8dba-9eea60d8942b", 00:27:52.264 "is_configured": true, 00:27:52.264 "data_offset": 0, 00:27:52.264 "data_size": 65536 00:27:52.264 }, 00:27:52.264 { 00:27:52.264 "name": "BaseBdev3", 00:27:52.264 "uuid": "6d24ad34-9a1e-5ee1-936d-c46b61a7e60f", 00:27:52.264 "is_configured": true, 00:27:52.264 "data_offset": 0, 00:27:52.264 "data_size": 65536 00:27:52.264 }, 00:27:52.264 { 00:27:52.264 "name": "BaseBdev4", 00:27:52.264 "uuid": "135df408-f536-50ed-864f-1ae8169019fe", 00:27:52.264 "is_configured": true, 00:27:52.264 "data_offset": 0, 00:27:52.264 "data_size": 65536 00:27:52.264 } 00:27:52.264 ] 00:27:52.264 }' 00:27:52.264 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:27:52.264 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:52.264 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:52.264 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:52.264 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:27:52.264 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:27:52.264 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:27:52.264 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:27:52.264 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:27:52.264 13:47:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.264 13:47:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.264 [2024-11-20 13:47:55.170883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:52.522 [2024-11-20 13:47:55.218885] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:27:52.522 13:47:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.522 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:27:52.522 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:27:52.522 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:52.522 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:52.522 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:52.522 13:47:55 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:52.522 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:52.522 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:52.522 13:47:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.522 13:47:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.523 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.523 13:47:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.523 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:52.523 "name": "raid_bdev1", 00:27:52.523 "uuid": "1de4f12a-7cd0-42e3-83af-58b415ea466e", 00:27:52.523 "strip_size_kb": 0, 00:27:52.523 "state": "online", 00:27:52.523 "raid_level": "raid1", 00:27:52.523 "superblock": false, 00:27:52.523 "num_base_bdevs": 4, 00:27:52.523 "num_base_bdevs_discovered": 3, 00:27:52.523 "num_base_bdevs_operational": 3, 00:27:52.523 "process": { 00:27:52.523 "type": "rebuild", 00:27:52.523 "target": "spare", 00:27:52.523 "progress": { 00:27:52.523 "blocks": 24576, 00:27:52.523 "percent": 37 00:27:52.523 } 00:27:52.523 }, 00:27:52.523 "base_bdevs_list": [ 00:27:52.523 { 00:27:52.523 "name": "spare", 00:27:52.523 "uuid": "47ffce25-7b94-5a6f-a706-acf804ef3325", 00:27:52.523 "is_configured": true, 00:27:52.523 "data_offset": 0, 00:27:52.523 "data_size": 65536 00:27:52.523 }, 00:27:52.523 { 00:27:52.523 "name": null, 00:27:52.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.523 "is_configured": false, 00:27:52.523 "data_offset": 0, 00:27:52.523 "data_size": 65536 00:27:52.523 }, 00:27:52.523 { 00:27:52.523 "name": "BaseBdev3", 00:27:52.523 "uuid": "6d24ad34-9a1e-5ee1-936d-c46b61a7e60f", 00:27:52.523 "is_configured": true, 
00:27:52.523 "data_offset": 0, 00:27:52.523 "data_size": 65536 00:27:52.523 }, 00:27:52.523 { 00:27:52.523 "name": "BaseBdev4", 00:27:52.523 "uuid": "135df408-f536-50ed-864f-1ae8169019fe", 00:27:52.523 "is_configured": true, 00:27:52.523 "data_offset": 0, 00:27:52.523 "data_size": 65536 00:27:52.523 } 00:27:52.523 ] 00:27:52.523 }' 00:27:52.523 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:52.523 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:52.523 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:52.523 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:52.523 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=486 00:27:52.523 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:52.523 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:52.523 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:52.523 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:52.523 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:52.523 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:52.523 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:52.523 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.523 13:47:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.523 13:47:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.523 13:47:55 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.782 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:52.782 "name": "raid_bdev1", 00:27:52.782 "uuid": "1de4f12a-7cd0-42e3-83af-58b415ea466e", 00:27:52.782 "strip_size_kb": 0, 00:27:52.782 "state": "online", 00:27:52.782 "raid_level": "raid1", 00:27:52.782 "superblock": false, 00:27:52.782 "num_base_bdevs": 4, 00:27:52.782 "num_base_bdevs_discovered": 3, 00:27:52.782 "num_base_bdevs_operational": 3, 00:27:52.782 "process": { 00:27:52.782 "type": "rebuild", 00:27:52.782 "target": "spare", 00:27:52.782 "progress": { 00:27:52.782 "blocks": 26624, 00:27:52.782 "percent": 40 00:27:52.782 } 00:27:52.782 }, 00:27:52.782 "base_bdevs_list": [ 00:27:52.782 { 00:27:52.782 "name": "spare", 00:27:52.782 "uuid": "47ffce25-7b94-5a6f-a706-acf804ef3325", 00:27:52.782 "is_configured": true, 00:27:52.782 "data_offset": 0, 00:27:52.782 "data_size": 65536 00:27:52.782 }, 00:27:52.782 { 00:27:52.782 "name": null, 00:27:52.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.782 "is_configured": false, 00:27:52.782 "data_offset": 0, 00:27:52.782 "data_size": 65536 00:27:52.782 }, 00:27:52.782 { 00:27:52.782 "name": "BaseBdev3", 00:27:52.782 "uuid": "6d24ad34-9a1e-5ee1-936d-c46b61a7e60f", 00:27:52.782 "is_configured": true, 00:27:52.782 "data_offset": 0, 00:27:52.782 "data_size": 65536 00:27:52.782 }, 00:27:52.782 { 00:27:52.782 "name": "BaseBdev4", 00:27:52.782 "uuid": "135df408-f536-50ed-864f-1ae8169019fe", 00:27:52.782 "is_configured": true, 00:27:52.782 "data_offset": 0, 00:27:52.782 "data_size": 65536 00:27:52.782 } 00:27:52.782 ] 00:27:52.782 }' 00:27:52.782 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:52.782 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:52.782 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:27:52.782 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:52.782 13:47:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:53.795 13:47:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:53.795 13:47:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:53.795 13:47:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:53.795 13:47:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:53.795 13:47:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:53.795 13:47:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:53.795 13:47:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.795 13:47:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:53.795 13:47:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.795 13:47:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.795 13:47:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.795 13:47:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:53.795 "name": "raid_bdev1", 00:27:53.795 "uuid": "1de4f12a-7cd0-42e3-83af-58b415ea466e", 00:27:53.795 "strip_size_kb": 0, 00:27:53.795 "state": "online", 00:27:53.795 "raid_level": "raid1", 00:27:53.795 "superblock": false, 00:27:53.795 "num_base_bdevs": 4, 00:27:53.795 "num_base_bdevs_discovered": 3, 00:27:53.795 "num_base_bdevs_operational": 3, 00:27:53.795 "process": { 00:27:53.795 "type": "rebuild", 00:27:53.795 "target": "spare", 00:27:53.795 "progress": { 00:27:53.795 
"blocks": 51200, 00:27:53.795 "percent": 78 00:27:53.795 } 00:27:53.795 }, 00:27:53.795 "base_bdevs_list": [ 00:27:53.795 { 00:27:53.795 "name": "spare", 00:27:53.795 "uuid": "47ffce25-7b94-5a6f-a706-acf804ef3325", 00:27:53.795 "is_configured": true, 00:27:53.795 "data_offset": 0, 00:27:53.795 "data_size": 65536 00:27:53.795 }, 00:27:53.795 { 00:27:53.795 "name": null, 00:27:53.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:53.795 "is_configured": false, 00:27:53.795 "data_offset": 0, 00:27:53.795 "data_size": 65536 00:27:53.795 }, 00:27:53.795 { 00:27:53.795 "name": "BaseBdev3", 00:27:53.795 "uuid": "6d24ad34-9a1e-5ee1-936d-c46b61a7e60f", 00:27:53.795 "is_configured": true, 00:27:53.795 "data_offset": 0, 00:27:53.795 "data_size": 65536 00:27:53.795 }, 00:27:53.795 { 00:27:53.795 "name": "BaseBdev4", 00:27:53.795 "uuid": "135df408-f536-50ed-864f-1ae8169019fe", 00:27:53.795 "is_configured": true, 00:27:53.795 "data_offset": 0, 00:27:53.795 "data_size": 65536 00:27:53.795 } 00:27:53.795 ] 00:27:53.795 }' 00:27:53.795 13:47:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:53.795 13:47:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:53.795 13:47:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:54.054 13:47:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:27:54.054 13:47:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:27:54.621 [2024-11-20 13:47:57.233948] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:54.621 [2024-11-20 13:47:57.234066] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:54.621 [2024-11-20 13:47:57.234139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:54.879 13:47:57 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:27:54.879 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:54.879 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:54.879 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:27:54.879 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:27:54.879 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:54.879 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:54.879 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.879 13:47:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.879 13:47:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.879 13:47:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.879 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:54.879 "name": "raid_bdev1", 00:27:54.879 "uuid": "1de4f12a-7cd0-42e3-83af-58b415ea466e", 00:27:54.879 "strip_size_kb": 0, 00:27:54.879 "state": "online", 00:27:54.879 "raid_level": "raid1", 00:27:54.879 "superblock": false, 00:27:54.879 "num_base_bdevs": 4, 00:27:54.879 "num_base_bdevs_discovered": 3, 00:27:54.879 "num_base_bdevs_operational": 3, 00:27:54.879 "base_bdevs_list": [ 00:27:54.879 { 00:27:54.879 "name": "spare", 00:27:54.879 "uuid": "47ffce25-7b94-5a6f-a706-acf804ef3325", 00:27:54.879 "is_configured": true, 00:27:54.879 "data_offset": 0, 00:27:54.879 "data_size": 65536 00:27:54.879 }, 00:27:54.879 { 00:27:54.879 "name": null, 00:27:54.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:54.879 "is_configured": false, 00:27:54.879 
"data_offset": 0, 00:27:54.879 "data_size": 65536 00:27:54.879 }, 00:27:54.879 { 00:27:54.879 "name": "BaseBdev3", 00:27:54.879 "uuid": "6d24ad34-9a1e-5ee1-936d-c46b61a7e60f", 00:27:54.879 "is_configured": true, 00:27:54.879 "data_offset": 0, 00:27:54.879 "data_size": 65536 00:27:54.879 }, 00:27:54.879 { 00:27:54.879 "name": "BaseBdev4", 00:27:54.879 "uuid": "135df408-f536-50ed-864f-1ae8169019fe", 00:27:54.879 "is_configured": true, 00:27:54.879 "data_offset": 0, 00:27:54.879 "data_size": 65536 00:27:54.879 } 00:27:54.879 ] 00:27:54.879 }' 00:27:54.879 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:55.138 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:55.138 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:55.138 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:27:55.138 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:27:55.138 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:55.138 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:27:55.138 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:27:55.138 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:27:55.138 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:27:55.138 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:55.138 13:47:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.138 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.138 13:47:57 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.138 13:47:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.138 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:27:55.138 "name": "raid_bdev1", 00:27:55.138 "uuid": "1de4f12a-7cd0-42e3-83af-58b415ea466e", 00:27:55.138 "strip_size_kb": 0, 00:27:55.138 "state": "online", 00:27:55.138 "raid_level": "raid1", 00:27:55.138 "superblock": false, 00:27:55.138 "num_base_bdevs": 4, 00:27:55.138 "num_base_bdevs_discovered": 3, 00:27:55.138 "num_base_bdevs_operational": 3, 00:27:55.138 "base_bdevs_list": [ 00:27:55.138 { 00:27:55.138 "name": "spare", 00:27:55.138 "uuid": "47ffce25-7b94-5a6f-a706-acf804ef3325", 00:27:55.138 "is_configured": true, 00:27:55.138 "data_offset": 0, 00:27:55.138 "data_size": 65536 00:27:55.138 }, 00:27:55.138 { 00:27:55.138 "name": null, 00:27:55.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:55.138 "is_configured": false, 00:27:55.138 "data_offset": 0, 00:27:55.138 "data_size": 65536 00:27:55.138 }, 00:27:55.138 { 00:27:55.138 "name": "BaseBdev3", 00:27:55.138 "uuid": "6d24ad34-9a1e-5ee1-936d-c46b61a7e60f", 00:27:55.138 "is_configured": true, 00:27:55.138 "data_offset": 0, 00:27:55.138 "data_size": 65536 00:27:55.138 }, 00:27:55.138 { 00:27:55.138 "name": "BaseBdev4", 00:27:55.138 "uuid": "135df408-f536-50ed-864f-1ae8169019fe", 00:27:55.138 "is_configured": true, 00:27:55.138 "data_offset": 0, 00:27:55.138 "data_size": 65536 00:27:55.138 } 00:27:55.138 ] 00:27:55.138 }' 00:27:55.138 13:47:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:27:55.138 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:27:55.138 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:27:55.396 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:27:55.396 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:55.396 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:55.396 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:55.396 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:55.396 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:55.396 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:55.396 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:55.396 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:55.396 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:55.396 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:55.397 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:55.397 13:47:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.397 13:47:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.397 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.397 13:47:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.397 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:55.397 "name": "raid_bdev1", 00:27:55.397 "uuid": "1de4f12a-7cd0-42e3-83af-58b415ea466e", 00:27:55.397 "strip_size_kb": 0, 00:27:55.397 "state": "online", 00:27:55.397 "raid_level": "raid1", 00:27:55.397 "superblock": false, 00:27:55.397 "num_base_bdevs": 4, 00:27:55.397 
"num_base_bdevs_discovered": 3, 00:27:55.397 "num_base_bdevs_operational": 3, 00:27:55.397 "base_bdevs_list": [ 00:27:55.397 { 00:27:55.397 "name": "spare", 00:27:55.397 "uuid": "47ffce25-7b94-5a6f-a706-acf804ef3325", 00:27:55.397 "is_configured": true, 00:27:55.397 "data_offset": 0, 00:27:55.397 "data_size": 65536 00:27:55.397 }, 00:27:55.397 { 00:27:55.397 "name": null, 00:27:55.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:55.397 "is_configured": false, 00:27:55.397 "data_offset": 0, 00:27:55.397 "data_size": 65536 00:27:55.397 }, 00:27:55.397 { 00:27:55.397 "name": "BaseBdev3", 00:27:55.397 "uuid": "6d24ad34-9a1e-5ee1-936d-c46b61a7e60f", 00:27:55.397 "is_configured": true, 00:27:55.397 "data_offset": 0, 00:27:55.397 "data_size": 65536 00:27:55.397 }, 00:27:55.397 { 00:27:55.397 "name": "BaseBdev4", 00:27:55.397 "uuid": "135df408-f536-50ed-864f-1ae8169019fe", 00:27:55.397 "is_configured": true, 00:27:55.397 "data_offset": 0, 00:27:55.397 "data_size": 65536 00:27:55.397 } 00:27:55.397 ] 00:27:55.397 }' 00:27:55.397 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:55.397 13:47:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.963 [2024-11-20 13:47:58.581969] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:55.963 [2024-11-20 13:47:58.582177] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:55.963 [2024-11-20 13:47:58.582387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:55.963 [2024-11-20 13:47:58.582657] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:27:55.963 [2024-11-20 13:47:58.582783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:27:55.963 13:47:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:55.963 13:47:58 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:55.964 13:47:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:56.221 /dev/nbd0 00:27:56.221 13:47:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:56.221 13:47:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:56.221 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:56.221 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:27:56.221 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:56.221 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:56.221 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:56.221 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:27:56.221 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:56.221 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:56.221 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:56.221 1+0 records in 00:27:56.221 1+0 records out 00:27:56.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448262 s, 9.1 MB/s 00:27:56.221 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:56.221 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:27:56.221 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:27:56.221 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:56.221 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:27:56.221 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:56.221 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:56.221 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:27:56.477 /dev/nbd1 00:27:56.477 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:56.477 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:56.477 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:27:56.477 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:27:56.477 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:56.477 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:56.477 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:27:56.477 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:27:56.477 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:56.477 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:56.477 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:56.477 1+0 records in 00:27:56.477 1+0 records out 00:27:56.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422468 s, 9.7 MB/s 00:27:56.477 13:47:59 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:56.477 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:27:56.477 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:56.477 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:56.477 13:47:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:27:56.477 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:56.477 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:56.477 13:47:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:56.735 13:47:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:27:56.735 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:27:56.735 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:56.735 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:56.735 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:27:56.735 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:56.735 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:27:57.072 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:57.072 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:57.072 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:57.072 13:47:59 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:57.072 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:57.072 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:57.072 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:57.072 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:57.072 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:57.072 13:47:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77998 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77998 ']' 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77998 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # 
uname 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77998 00:27:57.330 killing process with pid 77998 00:27:57.330 Received shutdown signal, test time was about 60.000000 seconds 00:27:57.330 00:27:57.330 Latency(us) 00:27:57.330 [2024-11-20T13:48:00.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:57.330 [2024-11-20T13:48:00.247Z] =================================================================================================================== 00:27:57.330 [2024-11-20T13:48:00.247Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77998' 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77998 00:27:57.330 [2024-11-20 13:48:00.153854] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:57.330 13:48:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77998 00:27:57.894 [2024-11-20 13:48:00.593680] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:58.828 ************************************ 00:27:58.828 END TEST raid_rebuild_test 00:27:58.828 ************************************ 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:27:58.828 00:27:58.828 real 0m21.403s 00:27:58.828 user 0m24.128s 00:27:58.828 sys 0m3.693s 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@10 -- # set +x 00:27:58.828 13:48:01 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:27:58.828 13:48:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:27:58.828 13:48:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:58.828 13:48:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:58.828 ************************************ 00:27:58.828 START TEST raid_rebuild_test_sb 00:27:58.828 ************************************ 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:27:58.828 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:27:58.829 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:27:58.829 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:27:58.829 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:27:58.829 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:27:58.829 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:27:58.829 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78483 00:27:58.829 13:48:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 78483 00:27:58.829 13:48:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:58.829 13:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78483 ']' 00:27:58.829 13:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.829 13:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:58.829 13:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.829 13:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:58.829 13:48:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:59.086 [2024-11-20 13:48:01.820858] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:27:59.086 [2024-11-20 13:48:01.821289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78483 ] 00:27:59.086 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:59.086 Zero copy mechanism will not be used. 
00:27:59.345 [2024-11-20 13:48:02.010071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.345 [2024-11-20 13:48:02.166451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.603 [2024-11-20 13:48:02.393456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:59.603 [2024-11-20 13:48:02.393680] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:59.862 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:59.862 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:27:59.862 13:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:27:59.862 13:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:59.862 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.862 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.121 BaseBdev1_malloc 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.121 [2024-11-20 13:48:02.804655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:00.121 [2024-11-20 13:48:02.804880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:00.121 [2024-11-20 13:48:02.805061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:00.121 [2024-11-20 
13:48:02.805195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:00.121 [2024-11-20 13:48:02.808067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:00.121 [2024-11-20 13:48:02.808120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:00.121 BaseBdev1 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.121 BaseBdev2_malloc 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.121 [2024-11-20 13:48:02.857062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:00.121 [2024-11-20 13:48:02.857144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:00.121 [2024-11-20 13:48:02.857177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:00.121 [2024-11-20 13:48:02.857196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:00.121 [2024-11-20 13:48:02.860038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:28:00.121 [2024-11-20 13:48:02.860089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:00.121 BaseBdev2 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.121 BaseBdev3_malloc 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.121 [2024-11-20 13:48:02.921825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:00.121 [2024-11-20 13:48:02.921929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:00.121 [2024-11-20 13:48:02.921979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:00.121 [2024-11-20 13:48:02.922012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:00.121 [2024-11-20 13:48:02.924867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:00.121 [2024-11-20 13:48:02.924934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:00.121 BaseBdev3 00:28:00.121 13:48:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.121 BaseBdev4_malloc 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.121 [2024-11-20 13:48:02.978710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:28:00.121 [2024-11-20 13:48:02.978797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:00.121 [2024-11-20 13:48:02.978830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:28:00.121 [2024-11-20 13:48:02.978848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:00.121 [2024-11-20 13:48:02.981747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:00.121 [2024-11-20 13:48:02.981802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:00.121 BaseBdev4 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.121 13:48:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.121 spare_malloc 00:28:00.121 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.121 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:00.121 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.122 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.380 spare_delay 00:28:00.380 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.380 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:00.380 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.380 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.380 [2024-11-20 13:48:03.039643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:00.380 [2024-11-20 13:48:03.039736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:00.381 [2024-11-20 13:48:03.039763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:28:00.381 [2024-11-20 13:48:03.039780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:00.381 [2024-11-20 13:48:03.042584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:00.381 [2024-11-20 13:48:03.042773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:00.381 spare 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.381 [2024-11-20 13:48:03.047770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:00.381 [2024-11-20 13:48:03.050381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:00.381 [2024-11-20 13:48:03.050473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:00.381 [2024-11-20 13:48:03.050574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:00.381 [2024-11-20 13:48:03.050833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:00.381 [2024-11-20 13:48:03.050861] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:00.381 [2024-11-20 13:48:03.051232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:00.381 [2024-11-20 13:48:03.051481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:00.381 [2024-11-20 13:48:03.051499] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:00.381 [2024-11-20 13:48:03.051779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:00.381 13:48:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:00.381 "name": "raid_bdev1", 00:28:00.381 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:00.381 "strip_size_kb": 0, 00:28:00.381 "state": "online", 00:28:00.381 "raid_level": "raid1", 00:28:00.381 "superblock": true, 00:28:00.381 "num_base_bdevs": 4, 00:28:00.381 "num_base_bdevs_discovered": 4, 00:28:00.381 "num_base_bdevs_operational": 4, 00:28:00.381 "base_bdevs_list": [ 00:28:00.381 { 
00:28:00.381 "name": "BaseBdev1", 00:28:00.381 "uuid": "99c6e29e-cf00-5c3b-bc3a-4deb0da52414", 00:28:00.381 "is_configured": true, 00:28:00.381 "data_offset": 2048, 00:28:00.381 "data_size": 63488 00:28:00.381 }, 00:28:00.381 { 00:28:00.381 "name": "BaseBdev2", 00:28:00.381 "uuid": "bcd856b5-f16c-5d35-a273-8b9614290e63", 00:28:00.381 "is_configured": true, 00:28:00.381 "data_offset": 2048, 00:28:00.381 "data_size": 63488 00:28:00.381 }, 00:28:00.381 { 00:28:00.381 "name": "BaseBdev3", 00:28:00.381 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:00.381 "is_configured": true, 00:28:00.381 "data_offset": 2048, 00:28:00.381 "data_size": 63488 00:28:00.381 }, 00:28:00.381 { 00:28:00.381 "name": "BaseBdev4", 00:28:00.381 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:00.381 "is_configured": true, 00:28:00.381 "data_offset": 2048, 00:28:00.381 "data_size": 63488 00:28:00.381 } 00:28:00.381 ] 00:28:00.381 }' 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:00.381 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.639 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:28:00.639 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:00.639 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.639 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.639 [2024-11-20 13:48:03.520434] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:00.639 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 
00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:00.896 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:01.154 
[2024-11-20 13:48:03.880179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:28:01.154 /dev/nbd0 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:01.154 1+0 records in 00:28:01.154 1+0 records out 00:28:01.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339603 s, 12.1 MB/s 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 
0 ']' 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:28:01.154 13:48:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:28:11.133 63488+0 records in 00:28:11.133 63488+0 records out 00:28:11.133 32505856 bytes (33 MB, 31 MiB) copied, 8.73774 s, 3.7 MB/s 00:28:11.133 13:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:28:11.133 13:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:11.133 13:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:11.133 13:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:11.133 13:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:11.133 13:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:11.133 13:48:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:11.133 [2024-11-20 13:48:13.003040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:11.133 [2024-11-20 13:48:13.019222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:11.133 13:48:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:11.133 "name": "raid_bdev1", 00:28:11.133 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:11.133 "strip_size_kb": 0, 00:28:11.133 "state": "online", 00:28:11.133 "raid_level": "raid1", 00:28:11.133 "superblock": true, 00:28:11.133 "num_base_bdevs": 4, 00:28:11.133 "num_base_bdevs_discovered": 3, 00:28:11.133 "num_base_bdevs_operational": 3, 00:28:11.133 "base_bdevs_list": [ 00:28:11.133 { 00:28:11.133 "name": null, 00:28:11.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:11.133 "is_configured": false, 00:28:11.133 "data_offset": 0, 00:28:11.133 "data_size": 63488 00:28:11.133 }, 00:28:11.133 { 00:28:11.133 "name": "BaseBdev2", 00:28:11.133 "uuid": "bcd856b5-f16c-5d35-a273-8b9614290e63", 00:28:11.133 "is_configured": true, 00:28:11.133 "data_offset": 2048, 00:28:11.133 "data_size": 63488 00:28:11.133 }, 00:28:11.133 { 00:28:11.133 "name": "BaseBdev3", 00:28:11.133 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:11.133 "is_configured": true, 00:28:11.133 "data_offset": 2048, 00:28:11.133 "data_size": 63488 00:28:11.133 }, 00:28:11.133 { 00:28:11.133 "name": "BaseBdev4", 00:28:11.133 "uuid": 
"386a6ee8-5d19-50e6-8367-587001277dac", 00:28:11.133 "is_configured": true, 00:28:11.133 "data_offset": 2048, 00:28:11.133 "data_size": 63488 00:28:11.133 } 00:28:11.133 ] 00:28:11.133 }' 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:11.133 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:11.134 13:48:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.134 13:48:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:11.134 [2024-11-20 13:48:13.507454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:11.134 [2024-11-20 13:48:13.521949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:28:11.134 13:48:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.134 13:48:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:28:11.134 [2024-11-20 13:48:13.524622] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:11.701 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:11.701 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:11.701 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:11.701 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:11.701 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:11.701 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:28:11.701 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:11.701 13:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.701 13:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:11.701 13:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.701 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:11.701 "name": "raid_bdev1", 00:28:11.701 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:11.701 "strip_size_kb": 0, 00:28:11.701 "state": "online", 00:28:11.701 "raid_level": "raid1", 00:28:11.701 "superblock": true, 00:28:11.701 "num_base_bdevs": 4, 00:28:11.701 "num_base_bdevs_discovered": 4, 00:28:11.701 "num_base_bdevs_operational": 4, 00:28:11.701 "process": { 00:28:11.701 "type": "rebuild", 00:28:11.701 "target": "spare", 00:28:11.701 "progress": { 00:28:11.701 "blocks": 20480, 00:28:11.701 "percent": 32 00:28:11.701 } 00:28:11.701 }, 00:28:11.701 "base_bdevs_list": [ 00:28:11.701 { 00:28:11.701 "name": "spare", 00:28:11.701 "uuid": "08e6945d-f091-5770-8bfd-b2ed0d395943", 00:28:11.701 "is_configured": true, 00:28:11.701 "data_offset": 2048, 00:28:11.701 "data_size": 63488 00:28:11.701 }, 00:28:11.701 { 00:28:11.701 "name": "BaseBdev2", 00:28:11.701 "uuid": "bcd856b5-f16c-5d35-a273-8b9614290e63", 00:28:11.701 "is_configured": true, 00:28:11.701 "data_offset": 2048, 00:28:11.701 "data_size": 63488 00:28:11.701 }, 00:28:11.701 { 00:28:11.701 "name": "BaseBdev3", 00:28:11.701 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:11.701 "is_configured": true, 00:28:11.701 "data_offset": 2048, 00:28:11.701 "data_size": 63488 00:28:11.701 }, 00:28:11.701 { 00:28:11.701 "name": "BaseBdev4", 00:28:11.701 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:11.701 "is_configured": true, 00:28:11.701 "data_offset": 2048, 00:28:11.701 
"data_size": 63488 00:28:11.701 } 00:28:11.701 ] 00:28:11.701 }' 00:28:11.701 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:11.961 [2024-11-20 13:48:14.714114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:11.961 [2024-11-20 13:48:14.734070] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:11.961 [2024-11-20 13:48:14.734315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:11.961 [2024-11-20 13:48:14.734483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:11.961 [2024-11-20 13:48:14.734546] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:11.961 "name": "raid_bdev1", 00:28:11.961 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:11.961 "strip_size_kb": 0, 00:28:11.961 "state": "online", 00:28:11.961 "raid_level": "raid1", 00:28:11.961 "superblock": true, 00:28:11.961 "num_base_bdevs": 4, 00:28:11.961 "num_base_bdevs_discovered": 3, 00:28:11.961 "num_base_bdevs_operational": 3, 00:28:11.961 "base_bdevs_list": [ 00:28:11.961 { 00:28:11.961 "name": null, 00:28:11.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:11.961 "is_configured": false, 00:28:11.961 "data_offset": 0, 00:28:11.961 "data_size": 63488 00:28:11.961 }, 00:28:11.961 { 00:28:11.961 "name": "BaseBdev2", 
00:28:11.961 "uuid": "bcd856b5-f16c-5d35-a273-8b9614290e63", 00:28:11.961 "is_configured": true, 00:28:11.961 "data_offset": 2048, 00:28:11.961 "data_size": 63488 00:28:11.961 }, 00:28:11.961 { 00:28:11.961 "name": "BaseBdev3", 00:28:11.961 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:11.961 "is_configured": true, 00:28:11.961 "data_offset": 2048, 00:28:11.961 "data_size": 63488 00:28:11.961 }, 00:28:11.961 { 00:28:11.961 "name": "BaseBdev4", 00:28:11.961 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:11.961 "is_configured": true, 00:28:11.961 "data_offset": 2048, 00:28:11.961 "data_size": 63488 00:28:11.961 } 00:28:11.961 ] 00:28:11.961 }' 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:11.961 13:48:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:12.530 13:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:12.530 13:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:12.530 13:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:12.530 13:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:12.530 13:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:12.530 13:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:12.530 13:48:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.530 13:48:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:12.530 13:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:12.530 13:48:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.530 13:48:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:12.530 "name": "raid_bdev1", 00:28:12.530 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:12.530 "strip_size_kb": 0, 00:28:12.530 "state": "online", 00:28:12.530 "raid_level": "raid1", 00:28:12.530 "superblock": true, 00:28:12.530 "num_base_bdevs": 4, 00:28:12.530 "num_base_bdevs_discovered": 3, 00:28:12.530 "num_base_bdevs_operational": 3, 00:28:12.530 "base_bdevs_list": [ 00:28:12.530 { 00:28:12.530 "name": null, 00:28:12.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:12.530 "is_configured": false, 00:28:12.530 "data_offset": 0, 00:28:12.530 "data_size": 63488 00:28:12.530 }, 00:28:12.530 { 00:28:12.530 "name": "BaseBdev2", 00:28:12.530 "uuid": "bcd856b5-f16c-5d35-a273-8b9614290e63", 00:28:12.530 "is_configured": true, 00:28:12.530 "data_offset": 2048, 00:28:12.530 "data_size": 63488 00:28:12.530 }, 00:28:12.530 { 00:28:12.530 "name": "BaseBdev3", 00:28:12.530 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:12.530 "is_configured": true, 00:28:12.530 "data_offset": 2048, 00:28:12.530 "data_size": 63488 00:28:12.530 }, 00:28:12.530 { 00:28:12.530 "name": "BaseBdev4", 00:28:12.530 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:12.530 "is_configured": true, 00:28:12.530 "data_offset": 2048, 00:28:12.530 "data_size": 63488 00:28:12.530 } 00:28:12.530 ] 00:28:12.530 }' 00:28:12.530 13:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:12.530 13:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:12.530 13:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:12.530 13:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:12.530 13:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:12.530 13:48:15 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.530 13:48:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:12.530 [2024-11-20 13:48:15.411347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:12.530 [2024-11-20 13:48:15.425510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:28:12.530 13:48:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.530 13:48:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:28:12.530 [2024-11-20 13:48:15.428087] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:13.907 "name": 
"raid_bdev1", 00:28:13.907 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:13.907 "strip_size_kb": 0, 00:28:13.907 "state": "online", 00:28:13.907 "raid_level": "raid1", 00:28:13.907 "superblock": true, 00:28:13.907 "num_base_bdevs": 4, 00:28:13.907 "num_base_bdevs_discovered": 4, 00:28:13.907 "num_base_bdevs_operational": 4, 00:28:13.907 "process": { 00:28:13.907 "type": "rebuild", 00:28:13.907 "target": "spare", 00:28:13.907 "progress": { 00:28:13.907 "blocks": 20480, 00:28:13.907 "percent": 32 00:28:13.907 } 00:28:13.907 }, 00:28:13.907 "base_bdevs_list": [ 00:28:13.907 { 00:28:13.907 "name": "spare", 00:28:13.907 "uuid": "08e6945d-f091-5770-8bfd-b2ed0d395943", 00:28:13.907 "is_configured": true, 00:28:13.907 "data_offset": 2048, 00:28:13.907 "data_size": 63488 00:28:13.907 }, 00:28:13.907 { 00:28:13.907 "name": "BaseBdev2", 00:28:13.907 "uuid": "bcd856b5-f16c-5d35-a273-8b9614290e63", 00:28:13.907 "is_configured": true, 00:28:13.907 "data_offset": 2048, 00:28:13.907 "data_size": 63488 00:28:13.907 }, 00:28:13.907 { 00:28:13.907 "name": "BaseBdev3", 00:28:13.907 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:13.907 "is_configured": true, 00:28:13.907 "data_offset": 2048, 00:28:13.907 "data_size": 63488 00:28:13.907 }, 00:28:13.907 { 00:28:13.907 "name": "BaseBdev4", 00:28:13.907 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:13.907 "is_configured": true, 00:28:13.907 "data_offset": 2048, 00:28:13.907 "data_size": 63488 00:28:13.907 } 00:28:13.907 ] 00:28:13.907 }' 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:13.907 13:48:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:28:13.907 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.907 [2024-11-20 13:48:16.597789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:13.907 [2024-11-20 13:48:16.737772] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:13.907 13:48:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.907 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:13.907 "name": "raid_bdev1", 00:28:13.907 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:13.907 "strip_size_kb": 0, 00:28:13.907 "state": "online", 00:28:13.907 "raid_level": "raid1", 00:28:13.907 "superblock": true, 00:28:13.907 "num_base_bdevs": 4, 00:28:13.907 "num_base_bdevs_discovered": 3, 00:28:13.907 "num_base_bdevs_operational": 3, 00:28:13.907 "process": { 00:28:13.907 "type": "rebuild", 00:28:13.907 "target": "spare", 00:28:13.907 "progress": { 00:28:13.907 "blocks": 24576, 00:28:13.907 "percent": 38 00:28:13.907 } 00:28:13.907 }, 00:28:13.908 "base_bdevs_list": [ 00:28:13.908 { 00:28:13.908 "name": "spare", 00:28:13.908 "uuid": "08e6945d-f091-5770-8bfd-b2ed0d395943", 00:28:13.908 "is_configured": true, 00:28:13.908 "data_offset": 2048, 00:28:13.908 "data_size": 63488 00:28:13.908 }, 00:28:13.908 { 00:28:13.908 "name": null, 00:28:13.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.908 "is_configured": false, 00:28:13.908 "data_offset": 0, 00:28:13.908 "data_size": 63488 00:28:13.908 }, 00:28:13.908 { 00:28:13.908 "name": "BaseBdev3", 00:28:13.908 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:13.908 "is_configured": true, 00:28:13.908 "data_offset": 2048, 00:28:13.908 "data_size": 63488 00:28:13.908 }, 
00:28:13.908 { 00:28:13.908 "name": "BaseBdev4", 00:28:13.908 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:13.908 "is_configured": true, 00:28:13.908 "data_offset": 2048, 00:28:13.908 "data_size": 63488 00:28:13.908 } 00:28:13.908 ] 00:28:13.908 }' 00:28:13.908 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:14.167 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:14.167 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:14.167 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:14.167 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=507 00:28:14.167 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:14.167 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:14.167 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:14.167 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:14.167 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:14.167 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:14.167 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:14.167 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:14.167 13:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.167 13:48:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:14.167 13:48:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.167 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:14.167 "name": "raid_bdev1", 00:28:14.167 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:14.167 "strip_size_kb": 0, 00:28:14.167 "state": "online", 00:28:14.167 "raid_level": "raid1", 00:28:14.167 "superblock": true, 00:28:14.167 "num_base_bdevs": 4, 00:28:14.167 "num_base_bdevs_discovered": 3, 00:28:14.167 "num_base_bdevs_operational": 3, 00:28:14.167 "process": { 00:28:14.167 "type": "rebuild", 00:28:14.167 "target": "spare", 00:28:14.167 "progress": { 00:28:14.167 "blocks": 26624, 00:28:14.167 "percent": 41 00:28:14.167 } 00:28:14.167 }, 00:28:14.167 "base_bdevs_list": [ 00:28:14.167 { 00:28:14.167 "name": "spare", 00:28:14.167 "uuid": "08e6945d-f091-5770-8bfd-b2ed0d395943", 00:28:14.167 "is_configured": true, 00:28:14.167 "data_offset": 2048, 00:28:14.167 "data_size": 63488 00:28:14.167 }, 00:28:14.167 { 00:28:14.167 "name": null, 00:28:14.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:14.167 "is_configured": false, 00:28:14.167 "data_offset": 0, 00:28:14.167 "data_size": 63488 00:28:14.167 }, 00:28:14.167 { 00:28:14.167 "name": "BaseBdev3", 00:28:14.167 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:14.167 "is_configured": true, 00:28:14.167 "data_offset": 2048, 00:28:14.167 "data_size": 63488 00:28:14.167 }, 00:28:14.167 { 00:28:14.167 "name": "BaseBdev4", 00:28:14.167 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:14.167 "is_configured": true, 00:28:14.167 "data_offset": 2048, 00:28:14.167 "data_size": 63488 00:28:14.167 } 00:28:14.167 ] 00:28:14.167 }' 00:28:14.167 13:48:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:14.167 13:48:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:14.167 13:48:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:28:14.167 13:48:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:14.167 13:48:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:15.543 13:48:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:15.543 13:48:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:15.543 13:48:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:15.543 13:48:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:15.543 13:48:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:15.543 13:48:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:15.543 13:48:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:15.543 13:48:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.544 13:48:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.544 13:48:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.544 13:48:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.544 13:48:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:15.544 "name": "raid_bdev1", 00:28:15.544 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:15.544 "strip_size_kb": 0, 00:28:15.544 "state": "online", 00:28:15.544 "raid_level": "raid1", 00:28:15.544 "superblock": true, 00:28:15.544 "num_base_bdevs": 4, 00:28:15.544 "num_base_bdevs_discovered": 3, 00:28:15.544 "num_base_bdevs_operational": 3, 00:28:15.544 "process": { 00:28:15.544 "type": "rebuild", 00:28:15.544 "target": "spare", 
00:28:15.544 "progress": { 00:28:15.544 "blocks": 51200, 00:28:15.544 "percent": 80 00:28:15.544 } 00:28:15.544 }, 00:28:15.544 "base_bdevs_list": [ 00:28:15.544 { 00:28:15.544 "name": "spare", 00:28:15.544 "uuid": "08e6945d-f091-5770-8bfd-b2ed0d395943", 00:28:15.544 "is_configured": true, 00:28:15.544 "data_offset": 2048, 00:28:15.544 "data_size": 63488 00:28:15.544 }, 00:28:15.544 { 00:28:15.544 "name": null, 00:28:15.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:15.544 "is_configured": false, 00:28:15.544 "data_offset": 0, 00:28:15.544 "data_size": 63488 00:28:15.544 }, 00:28:15.544 { 00:28:15.544 "name": "BaseBdev3", 00:28:15.544 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:15.544 "is_configured": true, 00:28:15.544 "data_offset": 2048, 00:28:15.544 "data_size": 63488 00:28:15.544 }, 00:28:15.544 { 00:28:15.544 "name": "BaseBdev4", 00:28:15.544 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:15.544 "is_configured": true, 00:28:15.544 "data_offset": 2048, 00:28:15.544 "data_size": 63488 00:28:15.544 } 00:28:15.544 ] 00:28:15.544 }' 00:28:15.544 13:48:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:15.544 13:48:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:15.544 13:48:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:15.544 13:48:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:15.544 13:48:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:15.803 [2024-11-20 13:48:18.653082] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:15.803 [2024-11-20 13:48:18.653177] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:15.803 [2024-11-20 13:48:18.653370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:28:16.370 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:16.371 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:16.371 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:16.371 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:16.371 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:16.371 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:16.371 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.371 13:48:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.371 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.371 13:48:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.371 13:48:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.629 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:16.629 "name": "raid_bdev1", 00:28:16.629 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:16.629 "strip_size_kb": 0, 00:28:16.629 "state": "online", 00:28:16.629 "raid_level": "raid1", 00:28:16.629 "superblock": true, 00:28:16.629 "num_base_bdevs": 4, 00:28:16.629 "num_base_bdevs_discovered": 3, 00:28:16.629 "num_base_bdevs_operational": 3, 00:28:16.629 "base_bdevs_list": [ 00:28:16.629 { 00:28:16.629 "name": "spare", 00:28:16.629 "uuid": "08e6945d-f091-5770-8bfd-b2ed0d395943", 00:28:16.629 "is_configured": true, 00:28:16.629 "data_offset": 2048, 00:28:16.629 "data_size": 63488 00:28:16.629 }, 00:28:16.629 { 00:28:16.629 "name": null, 
00:28:16.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.629 "is_configured": false, 00:28:16.629 "data_offset": 0, 00:28:16.629 "data_size": 63488 00:28:16.629 }, 00:28:16.629 { 00:28:16.629 "name": "BaseBdev3", 00:28:16.629 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:16.629 "is_configured": true, 00:28:16.630 "data_offset": 2048, 00:28:16.630 "data_size": 63488 00:28:16.630 }, 00:28:16.630 { 00:28:16.630 "name": "BaseBdev4", 00:28:16.630 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:16.630 "is_configured": true, 00:28:16.630 "data_offset": 2048, 00:28:16.630 "data_size": 63488 00:28:16.630 } 00:28:16.630 ] 00:28:16.630 }' 00:28:16.630 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:16.630 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:16.630 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:16.630 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:28:16.630 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:28:16.630 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:16.630 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:16.630 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:16.630 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:16.630 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:16.630 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.630 13:48:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.630 
13:48:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.630 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.630 13:48:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.630 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:16.630 "name": "raid_bdev1", 00:28:16.630 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:16.630 "strip_size_kb": 0, 00:28:16.630 "state": "online", 00:28:16.630 "raid_level": "raid1", 00:28:16.630 "superblock": true, 00:28:16.630 "num_base_bdevs": 4, 00:28:16.630 "num_base_bdevs_discovered": 3, 00:28:16.630 "num_base_bdevs_operational": 3, 00:28:16.630 "base_bdevs_list": [ 00:28:16.630 { 00:28:16.630 "name": "spare", 00:28:16.630 "uuid": "08e6945d-f091-5770-8bfd-b2ed0d395943", 00:28:16.630 "is_configured": true, 00:28:16.630 "data_offset": 2048, 00:28:16.630 "data_size": 63488 00:28:16.630 }, 00:28:16.630 { 00:28:16.630 "name": null, 00:28:16.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.630 "is_configured": false, 00:28:16.630 "data_offset": 0, 00:28:16.630 "data_size": 63488 00:28:16.630 }, 00:28:16.630 { 00:28:16.630 "name": "BaseBdev3", 00:28:16.630 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:16.630 "is_configured": true, 00:28:16.630 "data_offset": 2048, 00:28:16.630 "data_size": 63488 00:28:16.630 }, 00:28:16.630 { 00:28:16.630 "name": "BaseBdev4", 00:28:16.630 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:16.630 "is_configured": true, 00:28:16.630 "data_offset": 2048, 00:28:16.630 "data_size": 63488 00:28:16.630 } 00:28:16.630 ] 00:28:16.630 }' 00:28:16.630 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:16.630 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:16.630 13:48:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:16.904 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:16.904 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:16.904 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:16.904 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:16.904 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:16.904 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:16.904 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:16.904 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:16.904 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:16.904 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:16.904 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:16.904 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.904 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.904 13:48:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.904 13:48:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.904 13:48:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.904 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:16.904 "name": "raid_bdev1", 
00:28:16.904 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:16.904 "strip_size_kb": 0, 00:28:16.904 "state": "online", 00:28:16.904 "raid_level": "raid1", 00:28:16.904 "superblock": true, 00:28:16.904 "num_base_bdevs": 4, 00:28:16.904 "num_base_bdevs_discovered": 3, 00:28:16.904 "num_base_bdevs_operational": 3, 00:28:16.904 "base_bdevs_list": [ 00:28:16.904 { 00:28:16.904 "name": "spare", 00:28:16.904 "uuid": "08e6945d-f091-5770-8bfd-b2ed0d395943", 00:28:16.904 "is_configured": true, 00:28:16.904 "data_offset": 2048, 00:28:16.904 "data_size": 63488 00:28:16.904 }, 00:28:16.904 { 00:28:16.904 "name": null, 00:28:16.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.904 "is_configured": false, 00:28:16.904 "data_offset": 0, 00:28:16.904 "data_size": 63488 00:28:16.904 }, 00:28:16.904 { 00:28:16.904 "name": "BaseBdev3", 00:28:16.904 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:16.904 "is_configured": true, 00:28:16.904 "data_offset": 2048, 00:28:16.904 "data_size": 63488 00:28:16.904 }, 00:28:16.904 { 00:28:16.904 "name": "BaseBdev4", 00:28:16.904 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:16.904 "is_configured": true, 00:28:16.904 "data_offset": 2048, 00:28:16.904 "data_size": 63488 00:28:16.904 } 00:28:16.904 ] 00:28:16.904 }' 00:28:16.904 13:48:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:16.904 13:48:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.474 [2024-11-20 13:48:20.092171] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:17.474 [2024-11-20 13:48:20.092408] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:28:17.474 [2024-11-20 13:48:20.092647] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:17.474 [2024-11-20 13:48:20.092780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:17.474 [2024-11-20 13:48:20.092831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:17.474 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:17.801 /dev/nbd0 00:28:17.801 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:17.801 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:17.801 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:17.801 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:28:17.801 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:17.801 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:17.801 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:17.801 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:28:17.801 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:17.801 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:17.801 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:17.801 1+0 records in 00:28:17.801 1+0 records out 00:28:17.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601269 s, 6.8 MB/s 00:28:17.801 13:48:20 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:17.801 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:28:17.801 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:17.801 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:17.801 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:28:17.801 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:17.801 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:17.801 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:28:18.062 /dev/nbd1 00:28:18.062 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:18.062 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:18.063 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:28:18.063 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:28:18.063 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:18.063 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:18.063 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:28:18.063 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:28:18.063 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:18.063 13:48:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:18.063 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:18.063 1+0 records in 00:28:18.063 1+0 records out 00:28:18.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311606 s, 13.1 MB/s 00:28:18.063 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:18.063 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:28:18.063 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:18.063 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:18.063 13:48:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:28:18.063 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:18.063 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:18.063 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:18.409 13:48:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:28:18.409 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:18.409 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:18.409 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:18.410 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:18.410 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:18.410 13:48:21 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:18.410 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:18.670 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:18.670 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:18.670 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:18.670 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:18.670 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:18.670 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:18.670 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:18.670 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:18.670 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@45 -- # return 0 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.928 [2024-11-20 13:48:21.635447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:18.928 [2024-11-20 13:48:21.635530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:18.928 [2024-11-20 13:48:21.635570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:28:18.928 [2024-11-20 13:48:21.635585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:18.928 [2024-11-20 13:48:21.638646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:18.928 [2024-11-20 13:48:21.638695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:18.928 [2024-11-20 13:48:21.638826] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:18.928 [2024-11-20 13:48:21.638910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:18.928 [2024-11-20 13:48:21.639118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:28:18.928 [2024-11-20 13:48:21.639281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:18.928 spare 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.928 [2024-11-20 13:48:21.739427] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:28:18.928 [2024-11-20 13:48:21.739486] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:18.928 [2024-11-20 13:48:21.739988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:28:18.928 [2024-11-20 13:48:21.740285] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:28:18.928 [2024-11-20 13:48:21.740321] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:28:18.928 [2024-11-20 13:48:21.740585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.928 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:18.929 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:18.929 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:18.929 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:18.929 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:28:18.929 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:18.929 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:18.929 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:18.929 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:18.929 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:18.929 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:18.929 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:18.929 13:48:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.929 13:48:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.929 13:48:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.929 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:18.929 "name": "raid_bdev1", 00:28:18.929 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:18.929 "strip_size_kb": 0, 00:28:18.929 "state": "online", 00:28:18.929 "raid_level": "raid1", 00:28:18.929 "superblock": true, 00:28:18.929 "num_base_bdevs": 4, 00:28:18.929 "num_base_bdevs_discovered": 3, 00:28:18.929 "num_base_bdevs_operational": 3, 00:28:18.929 "base_bdevs_list": [ 00:28:18.929 { 00:28:18.929 "name": "spare", 00:28:18.929 "uuid": "08e6945d-f091-5770-8bfd-b2ed0d395943", 00:28:18.929 "is_configured": true, 00:28:18.929 "data_offset": 2048, 00:28:18.929 "data_size": 63488 00:28:18.929 }, 00:28:18.929 { 00:28:18.929 "name": null, 00:28:18.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:18.929 "is_configured": false, 00:28:18.929 "data_offset": 2048, 
00:28:18.929 "data_size": 63488 00:28:18.929 }, 00:28:18.929 { 00:28:18.929 "name": "BaseBdev3", 00:28:18.929 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:18.929 "is_configured": true, 00:28:18.929 "data_offset": 2048, 00:28:18.929 "data_size": 63488 00:28:18.929 }, 00:28:18.929 { 00:28:18.929 "name": "BaseBdev4", 00:28:18.929 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:18.929 "is_configured": true, 00:28:18.929 "data_offset": 2048, 00:28:18.929 "data_size": 63488 00:28:18.929 } 00:28:18.929 ] 00:28:18.929 }' 00:28:18.929 13:48:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:18.929 13:48:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.496 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:19.496 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:19.496 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:19.496 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:19.496 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:19.496 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.496 13:48:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.496 13:48:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.496 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.496 13:48:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.496 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:19.496 "name": "raid_bdev1", 00:28:19.496 "uuid": 
"f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:19.496 "strip_size_kb": 0, 00:28:19.496 "state": "online", 00:28:19.496 "raid_level": "raid1", 00:28:19.496 "superblock": true, 00:28:19.496 "num_base_bdevs": 4, 00:28:19.496 "num_base_bdevs_discovered": 3, 00:28:19.496 "num_base_bdevs_operational": 3, 00:28:19.496 "base_bdevs_list": [ 00:28:19.496 { 00:28:19.496 "name": "spare", 00:28:19.496 "uuid": "08e6945d-f091-5770-8bfd-b2ed0d395943", 00:28:19.496 "is_configured": true, 00:28:19.496 "data_offset": 2048, 00:28:19.496 "data_size": 63488 00:28:19.496 }, 00:28:19.496 { 00:28:19.496 "name": null, 00:28:19.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.496 "is_configured": false, 00:28:19.496 "data_offset": 2048, 00:28:19.496 "data_size": 63488 00:28:19.496 }, 00:28:19.496 { 00:28:19.496 "name": "BaseBdev3", 00:28:19.496 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:19.496 "is_configured": true, 00:28:19.496 "data_offset": 2048, 00:28:19.496 "data_size": 63488 00:28:19.496 }, 00:28:19.496 { 00:28:19.496 "name": "BaseBdev4", 00:28:19.496 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:19.496 "is_configured": true, 00:28:19.496 "data_offset": 2048, 00:28:19.496 "data_size": 63488 00:28:19.496 } 00:28:19.496 ] 00:28:19.496 }' 00:28:19.496 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:19.496 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:19.496 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:19.496 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:19.496 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.496 13:48:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.496 13:48:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:28:19.496 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:19.754 13:48:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.754 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:28:19.754 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:19.754 13:48:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.754 13:48:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.754 [2024-11-20 13:48:22.464760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:19.754 13:48:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.754 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:19.754 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:19.754 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:19.754 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:19.754 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:19.754 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:19.754 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:19.754 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:19.754 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:19.754 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:28:19.754 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.754 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.754 13:48:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.755 13:48:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.755 13:48:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.755 13:48:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:19.755 "name": "raid_bdev1", 00:28:19.755 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:19.755 "strip_size_kb": 0, 00:28:19.755 "state": "online", 00:28:19.755 "raid_level": "raid1", 00:28:19.755 "superblock": true, 00:28:19.755 "num_base_bdevs": 4, 00:28:19.755 "num_base_bdevs_discovered": 2, 00:28:19.755 "num_base_bdevs_operational": 2, 00:28:19.755 "base_bdevs_list": [ 00:28:19.755 { 00:28:19.755 "name": null, 00:28:19.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.755 "is_configured": false, 00:28:19.755 "data_offset": 0, 00:28:19.755 "data_size": 63488 00:28:19.755 }, 00:28:19.755 { 00:28:19.755 "name": null, 00:28:19.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.755 "is_configured": false, 00:28:19.755 "data_offset": 2048, 00:28:19.755 "data_size": 63488 00:28:19.755 }, 00:28:19.755 { 00:28:19.755 "name": "BaseBdev3", 00:28:19.755 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:19.755 "is_configured": true, 00:28:19.755 "data_offset": 2048, 00:28:19.755 "data_size": 63488 00:28:19.755 }, 00:28:19.755 { 00:28:19.755 "name": "BaseBdev4", 00:28:19.755 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:19.755 "is_configured": true, 00:28:19.755 "data_offset": 2048, 00:28:19.755 "data_size": 63488 00:28:19.755 } 00:28:19.755 ] 00:28:19.755 }' 00:28:19.755 13:48:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:19.755 13:48:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.321 13:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:20.321 13:48:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.321 13:48:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.321 [2024-11-20 13:48:23.016977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:20.321 [2024-11-20 13:48:23.017393] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:28:20.321 [2024-11-20 13:48:23.017436] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:20.321 [2024-11-20 13:48:23.017492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:20.321 [2024-11-20 13:48:23.031431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:28:20.321 13:48:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.321 13:48:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:28:20.321 [2024-11-20 13:48:23.034154] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:21.255 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:21.255 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:21.255 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:21.255 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:28:21.255 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:21.255 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:21.255 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.255 13:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.255 13:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.255 13:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.255 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:21.255 "name": "raid_bdev1", 00:28:21.255 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:21.255 "strip_size_kb": 0, 00:28:21.255 "state": "online", 00:28:21.255 "raid_level": "raid1", 00:28:21.255 "superblock": true, 00:28:21.255 "num_base_bdevs": 4, 00:28:21.255 "num_base_bdevs_discovered": 3, 00:28:21.255 "num_base_bdevs_operational": 3, 00:28:21.255 "process": { 00:28:21.255 "type": "rebuild", 00:28:21.255 "target": "spare", 00:28:21.255 "progress": { 00:28:21.255 "blocks": 20480, 00:28:21.255 "percent": 32 00:28:21.255 } 00:28:21.255 }, 00:28:21.255 "base_bdevs_list": [ 00:28:21.255 { 00:28:21.255 "name": "spare", 00:28:21.255 "uuid": "08e6945d-f091-5770-8bfd-b2ed0d395943", 00:28:21.255 "is_configured": true, 00:28:21.255 "data_offset": 2048, 00:28:21.255 "data_size": 63488 00:28:21.255 }, 00:28:21.255 { 00:28:21.255 "name": null, 00:28:21.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.255 "is_configured": false, 00:28:21.255 "data_offset": 2048, 00:28:21.255 "data_size": 63488 00:28:21.255 }, 00:28:21.255 { 00:28:21.255 "name": "BaseBdev3", 00:28:21.255 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:21.255 "is_configured": true, 00:28:21.255 "data_offset": 2048, 00:28:21.255 "data_size": 
63488 00:28:21.255 }, 00:28:21.255 { 00:28:21.255 "name": "BaseBdev4", 00:28:21.255 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:21.255 "is_configured": true, 00:28:21.255 "data_offset": 2048, 00:28:21.256 "data_size": 63488 00:28:21.256 } 00:28:21.256 ] 00:28:21.256 }' 00:28:21.256 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:21.256 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:21.256 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:21.514 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:21.514 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:28:21.514 13:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.514 13:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.514 [2024-11-20 13:48:24.195848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:21.514 [2024-11-20 13:48:24.243775] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:21.514 [2024-11-20 13:48:24.244180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:21.514 [2024-11-20 13:48:24.244453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:21.514 [2024-11-20 13:48:24.244509] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:21.514 13:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.515 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:21.515 13:48:24 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:21.515 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:21.515 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:21.515 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:21.515 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:21.515 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:21.515 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:21.515 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:21.515 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:21.515 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.515 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:21.515 13:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.515 13:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.515 13:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.515 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:21.515 "name": "raid_bdev1", 00:28:21.515 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:21.515 "strip_size_kb": 0, 00:28:21.515 "state": "online", 00:28:21.515 "raid_level": "raid1", 00:28:21.515 "superblock": true, 00:28:21.515 "num_base_bdevs": 4, 00:28:21.515 "num_base_bdevs_discovered": 2, 00:28:21.515 "num_base_bdevs_operational": 2, 00:28:21.515 "base_bdevs_list": [ 00:28:21.515 { 00:28:21.515 "name": null, 
00:28:21.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.515 "is_configured": false, 00:28:21.515 "data_offset": 0, 00:28:21.515 "data_size": 63488 00:28:21.515 }, 00:28:21.515 { 00:28:21.515 "name": null, 00:28:21.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.515 "is_configured": false, 00:28:21.515 "data_offset": 2048, 00:28:21.515 "data_size": 63488 00:28:21.515 }, 00:28:21.515 { 00:28:21.515 "name": "BaseBdev3", 00:28:21.515 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:21.515 "is_configured": true, 00:28:21.515 "data_offset": 2048, 00:28:21.515 "data_size": 63488 00:28:21.515 }, 00:28:21.515 { 00:28:21.515 "name": "BaseBdev4", 00:28:21.515 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:21.515 "is_configured": true, 00:28:21.515 "data_offset": 2048, 00:28:21.515 "data_size": 63488 00:28:21.515 } 00:28:21.515 ] 00:28:21.515 }' 00:28:21.515 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:21.515 13:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.081 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:22.081 13:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.081 13:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.081 [2024-11-20 13:48:24.781509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:22.081 [2024-11-20 13:48:24.781634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:22.081 [2024-11-20 13:48:24.781679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:28:22.081 [2024-11-20 13:48:24.781695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:22.081 [2024-11-20 13:48:24.782334] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:28:22.081 [2024-11-20 13:48:24.782377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:22.081 [2024-11-20 13:48:24.782519] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:22.081 [2024-11-20 13:48:24.782541] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:28:22.081 [2024-11-20 13:48:24.782558] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:22.081 [2024-11-20 13:48:24.782601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:22.081 [2024-11-20 13:48:24.796340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:28:22.081 spare 00:28:22.081 13:48:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.081 13:48:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:28:22.081 [2024-11-20 13:48:24.799077] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:23.016 13:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:23.016 13:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:23.016 13:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:23.016 13:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:23.016 13:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:23.016 13:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:23.016 13:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:28:23.016 13:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.016 13:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.016 13:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.016 13:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:23.016 "name": "raid_bdev1", 00:28:23.016 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:23.016 "strip_size_kb": 0, 00:28:23.016 "state": "online", 00:28:23.016 "raid_level": "raid1", 00:28:23.016 "superblock": true, 00:28:23.016 "num_base_bdevs": 4, 00:28:23.016 "num_base_bdevs_discovered": 3, 00:28:23.016 "num_base_bdevs_operational": 3, 00:28:23.016 "process": { 00:28:23.016 "type": "rebuild", 00:28:23.016 "target": "spare", 00:28:23.016 "progress": { 00:28:23.016 "blocks": 20480, 00:28:23.016 "percent": 32 00:28:23.016 } 00:28:23.016 }, 00:28:23.016 "base_bdevs_list": [ 00:28:23.016 { 00:28:23.016 "name": "spare", 00:28:23.016 "uuid": "08e6945d-f091-5770-8bfd-b2ed0d395943", 00:28:23.016 "is_configured": true, 00:28:23.016 "data_offset": 2048, 00:28:23.016 "data_size": 63488 00:28:23.016 }, 00:28:23.016 { 00:28:23.016 "name": null, 00:28:23.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:23.016 "is_configured": false, 00:28:23.016 "data_offset": 2048, 00:28:23.016 "data_size": 63488 00:28:23.016 }, 00:28:23.016 { 00:28:23.016 "name": "BaseBdev3", 00:28:23.016 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:23.016 "is_configured": true, 00:28:23.016 "data_offset": 2048, 00:28:23.016 "data_size": 63488 00:28:23.016 }, 00:28:23.016 { 00:28:23.016 "name": "BaseBdev4", 00:28:23.016 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:23.016 "is_configured": true, 00:28:23.016 "data_offset": 2048, 00:28:23.016 "data_size": 63488 00:28:23.016 } 00:28:23.016 ] 00:28:23.016 }' 00:28:23.016 13:48:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:23.016 13:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:23.016 13:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:23.275 13:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:23.275 13:48:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:28:23.275 13:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.275 13:48:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.275 [2024-11-20 13:48:25.964813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:23.275 [2024-11-20 13:48:26.008757] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:23.275 [2024-11-20 13:48:26.008845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:23.275 [2024-11-20 13:48:26.008870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:23.275 [2024-11-20 13:48:26.008885] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:23.275 13:48:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.275 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:23.275 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:23.275 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:23.275 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:23.275 13:48:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:23.275 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:23.275 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:23.275 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:23.275 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:23.275 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:23.275 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:23.275 13:48:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.275 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.275 13:48:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.275 13:48:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.275 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:23.275 "name": "raid_bdev1", 00:28:23.275 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:23.275 "strip_size_kb": 0, 00:28:23.275 "state": "online", 00:28:23.275 "raid_level": "raid1", 00:28:23.275 "superblock": true, 00:28:23.275 "num_base_bdevs": 4, 00:28:23.275 "num_base_bdevs_discovered": 2, 00:28:23.275 "num_base_bdevs_operational": 2, 00:28:23.275 "base_bdevs_list": [ 00:28:23.275 { 00:28:23.275 "name": null, 00:28:23.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:23.275 "is_configured": false, 00:28:23.275 "data_offset": 0, 00:28:23.275 "data_size": 63488 00:28:23.275 }, 00:28:23.275 { 00:28:23.275 "name": null, 00:28:23.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:23.275 
"is_configured": false, 00:28:23.275 "data_offset": 2048, 00:28:23.275 "data_size": 63488 00:28:23.275 }, 00:28:23.275 { 00:28:23.275 "name": "BaseBdev3", 00:28:23.275 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:23.275 "is_configured": true, 00:28:23.275 "data_offset": 2048, 00:28:23.275 "data_size": 63488 00:28:23.275 }, 00:28:23.275 { 00:28:23.275 "name": "BaseBdev4", 00:28:23.275 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:23.275 "is_configured": true, 00:28:23.275 "data_offset": 2048, 00:28:23.275 "data_size": 63488 00:28:23.275 } 00:28:23.275 ] 00:28:23.275 }' 00:28:23.275 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:23.275 13:48:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:28:23.842 "name": "raid_bdev1", 00:28:23.842 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:23.842 "strip_size_kb": 0, 00:28:23.842 "state": "online", 00:28:23.842 "raid_level": "raid1", 00:28:23.842 "superblock": true, 00:28:23.842 "num_base_bdevs": 4, 00:28:23.842 "num_base_bdevs_discovered": 2, 00:28:23.842 "num_base_bdevs_operational": 2, 00:28:23.842 "base_bdevs_list": [ 00:28:23.842 { 00:28:23.842 "name": null, 00:28:23.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:23.842 "is_configured": false, 00:28:23.842 "data_offset": 0, 00:28:23.842 "data_size": 63488 00:28:23.842 }, 00:28:23.842 { 00:28:23.842 "name": null, 00:28:23.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:23.842 "is_configured": false, 00:28:23.842 "data_offset": 2048, 00:28:23.842 "data_size": 63488 00:28:23.842 }, 00:28:23.842 { 00:28:23.842 "name": "BaseBdev3", 00:28:23.842 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:23.842 "is_configured": true, 00:28:23.842 "data_offset": 2048, 00:28:23.842 "data_size": 63488 00:28:23.842 }, 00:28:23.842 { 00:28:23.842 "name": "BaseBdev4", 00:28:23.842 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:23.842 "is_configured": true, 00:28:23.842 "data_offset": 2048, 00:28:23.842 "data_size": 63488 00:28:23.842 } 00:28:23.842 ] 00:28:23.842 }' 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.842 13:48:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.843 [2024-11-20 13:48:26.721653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:23.843 [2024-11-20 13:48:26.721726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:23.843 [2024-11-20 13:48:26.721755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:28:23.843 [2024-11-20 13:48:26.721773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:23.843 [2024-11-20 13:48:26.722373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:23.843 [2024-11-20 13:48:26.722411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:23.843 [2024-11-20 13:48:26.722527] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:23.843 [2024-11-20 13:48:26.722555] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:28:23.843 [2024-11-20 13:48:26.722567] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:23.843 [2024-11-20 13:48:26.722603] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:28:23.843 BaseBdev1 00:28:23.843 13:48:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:28:23.843 13:48:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:28:24.821 13:48:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:24.821 13:48:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:24.821 13:48:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:24.821 13:48:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:24.821 13:48:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:24.821 13:48:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:24.821 13:48:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:24.821 13:48:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:24.821 13:48:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:24.821 13:48:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:25.080 13:48:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:25.080 13:48:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.080 13:48:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:25.080 13:48:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.080 13:48:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.080 13:48:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:25.080 "name": "raid_bdev1", 00:28:25.080 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:25.080 "strip_size_kb": 0, 
00:28:25.080 "state": "online", 00:28:25.080 "raid_level": "raid1", 00:28:25.080 "superblock": true, 00:28:25.080 "num_base_bdevs": 4, 00:28:25.080 "num_base_bdevs_discovered": 2, 00:28:25.080 "num_base_bdevs_operational": 2, 00:28:25.080 "base_bdevs_list": [ 00:28:25.080 { 00:28:25.080 "name": null, 00:28:25.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:25.080 "is_configured": false, 00:28:25.080 "data_offset": 0, 00:28:25.080 "data_size": 63488 00:28:25.080 }, 00:28:25.080 { 00:28:25.080 "name": null, 00:28:25.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:25.080 "is_configured": false, 00:28:25.080 "data_offset": 2048, 00:28:25.080 "data_size": 63488 00:28:25.080 }, 00:28:25.080 { 00:28:25.080 "name": "BaseBdev3", 00:28:25.080 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:25.080 "is_configured": true, 00:28:25.080 "data_offset": 2048, 00:28:25.080 "data_size": 63488 00:28:25.080 }, 00:28:25.080 { 00:28:25.080 "name": "BaseBdev4", 00:28:25.080 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:25.080 "is_configured": true, 00:28:25.080 "data_offset": 2048, 00:28:25.080 "data_size": 63488 00:28:25.080 } 00:28:25.080 ] 00:28:25.080 }' 00:28:25.080 13:48:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:25.080 13:48:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:25.338 13:48:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:25.338 13:48:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:25.338 13:48:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:25.338 13:48:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:25.338 13:48:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:25.338 13:48:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:25.338 13:48:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.338 13:48:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:25.338 13:48:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:25.597 "name": "raid_bdev1", 00:28:25.597 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:25.597 "strip_size_kb": 0, 00:28:25.597 "state": "online", 00:28:25.597 "raid_level": "raid1", 00:28:25.597 "superblock": true, 00:28:25.597 "num_base_bdevs": 4, 00:28:25.597 "num_base_bdevs_discovered": 2, 00:28:25.597 "num_base_bdevs_operational": 2, 00:28:25.597 "base_bdevs_list": [ 00:28:25.597 { 00:28:25.597 "name": null, 00:28:25.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:25.597 "is_configured": false, 00:28:25.597 "data_offset": 0, 00:28:25.597 "data_size": 63488 00:28:25.597 }, 00:28:25.597 { 00:28:25.597 "name": null, 00:28:25.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:25.597 "is_configured": false, 00:28:25.597 "data_offset": 2048, 00:28:25.597 "data_size": 63488 00:28:25.597 }, 00:28:25.597 { 00:28:25.597 "name": "BaseBdev3", 00:28:25.597 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:25.597 "is_configured": true, 00:28:25.597 "data_offset": 2048, 00:28:25.597 "data_size": 63488 00:28:25.597 }, 00:28:25.597 { 00:28:25.597 "name": "BaseBdev4", 00:28:25.597 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:25.597 "is_configured": true, 00:28:25.597 "data_offset": 2048, 00:28:25.597 "data_size": 63488 00:28:25.597 } 00:28:25.597 ] 00:28:25.597 }' 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:25.597 [2024-11-20 13:48:28.410233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:25.597 [2024-11-20 13:48:28.410543] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:28:25.597 [2024-11-20 13:48:28.410563] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this 
bdev's uuid 00:28:25.597 request: 00:28:25.597 { 00:28:25.597 "base_bdev": "BaseBdev1", 00:28:25.597 "raid_bdev": "raid_bdev1", 00:28:25.597 "method": "bdev_raid_add_base_bdev", 00:28:25.597 "req_id": 1 00:28:25.597 } 00:28:25.597 Got JSON-RPC error response 00:28:25.597 response: 00:28:25.597 { 00:28:25.597 "code": -22, 00:28:25.597 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:25.597 } 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:25.597 13:48:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:28:26.531 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:26.531 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:26.531 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:26.531 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:26.531 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:26.531 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:26.531 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:26.531 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:26.531 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- 
# local num_base_bdevs_discovered 00:28:26.531 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:26.531 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:26.531 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.531 13:48:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.531 13:48:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:26.531 13:48:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.789 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:26.789 "name": "raid_bdev1", 00:28:26.789 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:26.789 "strip_size_kb": 0, 00:28:26.789 "state": "online", 00:28:26.789 "raid_level": "raid1", 00:28:26.789 "superblock": true, 00:28:26.789 "num_base_bdevs": 4, 00:28:26.789 "num_base_bdevs_discovered": 2, 00:28:26.789 "num_base_bdevs_operational": 2, 00:28:26.789 "base_bdevs_list": [ 00:28:26.789 { 00:28:26.789 "name": null, 00:28:26.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.789 "is_configured": false, 00:28:26.789 "data_offset": 0, 00:28:26.789 "data_size": 63488 00:28:26.789 }, 00:28:26.789 { 00:28:26.789 "name": null, 00:28:26.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.789 "is_configured": false, 00:28:26.789 "data_offset": 2048, 00:28:26.789 "data_size": 63488 00:28:26.789 }, 00:28:26.789 { 00:28:26.789 "name": "BaseBdev3", 00:28:26.789 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:26.789 "is_configured": true, 00:28:26.789 "data_offset": 2048, 00:28:26.789 "data_size": 63488 00:28:26.789 }, 00:28:26.789 { 00:28:26.789 "name": "BaseBdev4", 00:28:26.789 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:26.789 "is_configured": true, 00:28:26.789 
"data_offset": 2048, 00:28:26.789 "data_size": 63488 00:28:26.789 } 00:28:26.789 ] 00:28:26.789 }' 00:28:26.789 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:26.789 13:48:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:27.048 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:27.048 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:27.048 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:27.048 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:27.048 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:27.307 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:27.307 13:48:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:27.307 13:48:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.307 13:48:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:27.307 13:48:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.307 13:48:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:27.307 "name": "raid_bdev1", 00:28:27.307 "uuid": "f7993c47-e2c2-4942-8321-65f6c775de1d", 00:28:27.307 "strip_size_kb": 0, 00:28:27.307 "state": "online", 00:28:27.307 "raid_level": "raid1", 00:28:27.307 "superblock": true, 00:28:27.307 "num_base_bdevs": 4, 00:28:27.307 "num_base_bdevs_discovered": 2, 00:28:27.307 "num_base_bdevs_operational": 2, 00:28:27.307 "base_bdevs_list": [ 00:28:27.307 { 00:28:27.307 "name": null, 00:28:27.307 "uuid": "00000000-0000-0000-0000-000000000000", 
00:28:27.307 "is_configured": false, 00:28:27.307 "data_offset": 0, 00:28:27.307 "data_size": 63488 00:28:27.307 }, 00:28:27.307 { 00:28:27.307 "name": null, 00:28:27.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:27.307 "is_configured": false, 00:28:27.307 "data_offset": 2048, 00:28:27.307 "data_size": 63488 00:28:27.307 }, 00:28:27.307 { 00:28:27.307 "name": "BaseBdev3", 00:28:27.307 "uuid": "aeca8037-9428-5e23-84b0-e783d0c16e6e", 00:28:27.307 "is_configured": true, 00:28:27.307 "data_offset": 2048, 00:28:27.307 "data_size": 63488 00:28:27.307 }, 00:28:27.307 { 00:28:27.307 "name": "BaseBdev4", 00:28:27.307 "uuid": "386a6ee8-5d19-50e6-8367-587001277dac", 00:28:27.307 "is_configured": true, 00:28:27.307 "data_offset": 2048, 00:28:27.307 "data_size": 63488 00:28:27.307 } 00:28:27.307 ] 00:28:27.307 }' 00:28:27.307 13:48:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:27.307 13:48:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:27.307 13:48:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:27.307 13:48:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:27.307 13:48:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78483 00:28:27.307 13:48:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78483 ']' 00:28:27.307 13:48:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78483 00:28:27.307 13:48:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:28:27.307 13:48:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:27.307 13:48:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78483 00:28:27.307 killing process with pid 78483 00:28:27.307 Received shutdown signal, 
test time was about 60.000000 seconds 00:28:27.307 00:28:27.307 Latency(us) 00:28:27.307 [2024-11-20T13:48:30.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.307 [2024-11-20T13:48:30.224Z] =================================================================================================================== 00:28:27.307 [2024-11-20T13:48:30.224Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:27.307 13:48:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:27.307 13:48:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:27.307 13:48:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78483' 00:28:27.307 13:48:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78483 00:28:27.307 13:48:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78483 00:28:27.307 [2024-11-20 13:48:30.155118] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:27.307 [2024-11-20 13:48:30.155278] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:27.307 [2024-11-20 13:48:30.155379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:27.307 [2024-11-20 13:48:30.155402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:28:27.874 [2024-11-20 13:48:30.606085] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:28:28.809 00:28:28.809 real 0m29.945s 00:28:28.809 user 0m36.500s 00:28:28.809 sys 0m4.409s 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:28.809 ************************************ 00:28:28.809 END 
TEST raid_rebuild_test_sb 00:28:28.809 ************************************ 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:28.809 13:48:31 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:28:28.809 13:48:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:28:28.809 13:48:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:28.809 13:48:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:28.809 ************************************ 00:28:28.809 START TEST raid_rebuild_test_io 00:28:28.809 ************************************ 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:28:28.809 13:48:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79281 00:28:28.809 13:48:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79281 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79281 ']' 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:28.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:28.809 13:48:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:29.068 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:29.068 Zero copy mechanism will not be used. 00:28:29.068 [2024-11-20 13:48:31.837559] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:28:29.068 [2024-11-20 13:48:31.837764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79281 ] 00:28:29.326 [2024-11-20 13:48:32.018833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.326 [2024-11-20 13:48:32.152422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.584 [2024-11-20 13:48:32.356028] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:29.584 [2024-11-20 13:48:32.356107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:30.152 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:30.152 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:28:30.152 13:48:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.153 BaseBdev1_malloc 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.153 [2024-11-20 13:48:32.848819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:28:30.153 [2024-11-20 13:48:32.849053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:30.153 [2024-11-20 13:48:32.849096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:30.153 [2024-11-20 13:48:32.849116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:30.153 [2024-11-20 13:48:32.852066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:30.153 [2024-11-20 13:48:32.852353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:30.153 BaseBdev1 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.153 BaseBdev2_malloc 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.153 [2024-11-20 13:48:32.902359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:30.153 [2024-11-20 13:48:32.902451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:30.153 [2024-11-20 13:48:32.902482] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:30.153 [2024-11-20 13:48:32.902500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:30.153 [2024-11-20 13:48:32.905407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:30.153 [2024-11-20 13:48:32.905468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:30.153 BaseBdev2 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.153 BaseBdev3_malloc 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.153 [2024-11-20 13:48:32.964350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:30.153 [2024-11-20 13:48:32.964618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:30.153 [2024-11-20 13:48:32.964663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:30.153 [2024-11-20 13:48:32.964684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:28:30.153 [2024-11-20 13:48:32.967732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:30.153 [2024-11-20 13:48:32.967918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:30.153 BaseBdev3 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.153 13:48:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.153 BaseBdev4_malloc 00:28:30.153 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.153 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:30.153 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.153 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.153 [2024-11-20 13:48:33.022949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:28:30.153 [2024-11-20 13:48:33.023045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:30.153 [2024-11-20 13:48:33.023077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:28:30.153 [2024-11-20 13:48:33.023096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:30.153 [2024-11-20 13:48:33.026315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:30.153 [2024-11-20 13:48:33.026391] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:30.153 BaseBdev4 00:28:30.153 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.153 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:28:30.153 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.153 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.412 spare_malloc 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.412 spare_delay 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.412 [2024-11-20 13:48:33.087954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:30.412 [2024-11-20 13:48:33.088059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:30.412 [2024-11-20 13:48:33.088087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:28:30.412 [2024-11-20 13:48:33.088105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:28:30.412 [2024-11-20 13:48:33.090918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:30.412 [2024-11-20 13:48:33.091009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:30.412 spare 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.412 [2024-11-20 13:48:33.096046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:30.412 [2024-11-20 13:48:33.098460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:30.412 [2024-11-20 13:48:33.098537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:30.412 [2024-11-20 13:48:33.098610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:30.412 [2024-11-20 13:48:33.098709] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:30.412 [2024-11-20 13:48:33.098731] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:30.412 [2024-11-20 13:48:33.099075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:30.412 [2024-11-20 13:48:33.099309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:30.412 [2024-11-20 13:48:33.099327] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:30.412 [2024-11-20 13:48:33.099494] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:30.412 "name": "raid_bdev1", 00:28:30.412 "uuid": "dc7fb352-7f0c-4611-9963-3678af4771e0", 00:28:30.412 
"strip_size_kb": 0, 00:28:30.412 "state": "online", 00:28:30.412 "raid_level": "raid1", 00:28:30.412 "superblock": false, 00:28:30.412 "num_base_bdevs": 4, 00:28:30.412 "num_base_bdevs_discovered": 4, 00:28:30.412 "num_base_bdevs_operational": 4, 00:28:30.412 "base_bdevs_list": [ 00:28:30.412 { 00:28:30.412 "name": "BaseBdev1", 00:28:30.412 "uuid": "bc9a811a-d871-54c4-9e67-bb0e4015cff0", 00:28:30.412 "is_configured": true, 00:28:30.412 "data_offset": 0, 00:28:30.412 "data_size": 65536 00:28:30.412 }, 00:28:30.412 { 00:28:30.412 "name": "BaseBdev2", 00:28:30.412 "uuid": "ae5d5b1e-b5ae-59c9-9a54-5e02de1cb795", 00:28:30.412 "is_configured": true, 00:28:30.412 "data_offset": 0, 00:28:30.412 "data_size": 65536 00:28:30.412 }, 00:28:30.412 { 00:28:30.412 "name": "BaseBdev3", 00:28:30.412 "uuid": "c0596ff6-6f34-5e7f-8882-c38b968e2e23", 00:28:30.412 "is_configured": true, 00:28:30.412 "data_offset": 0, 00:28:30.412 "data_size": 65536 00:28:30.412 }, 00:28:30.412 { 00:28:30.412 "name": "BaseBdev4", 00:28:30.412 "uuid": "61820f65-724f-5d46-8560-c13f406d6846", 00:28:30.412 "is_configured": true, 00:28:30.412 "data_offset": 0, 00:28:30.412 "data_size": 65536 00:28:30.412 } 00:28:30.412 ] 00:28:30.412 }' 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:30.412 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.978 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:28:30.978 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:30.978 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.978 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.978 [2024-11-20 13:48:33.617474] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:30.978 13:48:33 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.978 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:28:30.978 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:30.978 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:30.978 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.978 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.978 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.978 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:28:30.978 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.979 [2024-11-20 13:48:33.725018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:30.979 "name": "raid_bdev1", 00:28:30.979 "uuid": "dc7fb352-7f0c-4611-9963-3678af4771e0", 00:28:30.979 "strip_size_kb": 0, 00:28:30.979 "state": "online", 00:28:30.979 "raid_level": "raid1", 00:28:30.979 "superblock": false, 00:28:30.979 "num_base_bdevs": 4, 00:28:30.979 "num_base_bdevs_discovered": 3, 00:28:30.979 "num_base_bdevs_operational": 3, 00:28:30.979 "base_bdevs_list": [ 00:28:30.979 { 00:28:30.979 "name": null, 00:28:30.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:30.979 "is_configured": false, 00:28:30.979 "data_offset": 0, 00:28:30.979 "data_size": 65536 00:28:30.979 
}, 00:28:30.979 { 00:28:30.979 "name": "BaseBdev2", 00:28:30.979 "uuid": "ae5d5b1e-b5ae-59c9-9a54-5e02de1cb795", 00:28:30.979 "is_configured": true, 00:28:30.979 "data_offset": 0, 00:28:30.979 "data_size": 65536 00:28:30.979 }, 00:28:30.979 { 00:28:30.979 "name": "BaseBdev3", 00:28:30.979 "uuid": "c0596ff6-6f34-5e7f-8882-c38b968e2e23", 00:28:30.979 "is_configured": true, 00:28:30.979 "data_offset": 0, 00:28:30.979 "data_size": 65536 00:28:30.979 }, 00:28:30.979 { 00:28:30.979 "name": "BaseBdev4", 00:28:30.979 "uuid": "61820f65-724f-5d46-8560-c13f406d6846", 00:28:30.979 "is_configured": true, 00:28:30.979 "data_offset": 0, 00:28:30.979 "data_size": 65536 00:28:30.979 } 00:28:30.979 ] 00:28:30.979 }' 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:30.979 13:48:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:30.979 [2024-11-20 13:48:33.857360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:28:30.979 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:30.979 Zero copy mechanism will not be used. 00:28:30.979 Running I/O for 60 seconds... 
00:28:31.546 13:48:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:31.546 13:48:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.546 13:48:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:31.546 [2024-11-20 13:48:34.271816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:31.546 13:48:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.546 13:48:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:28:31.546 [2024-11-20 13:48:34.361521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:28:31.546 [2024-11-20 13:48:34.364410] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:31.804 [2024-11-20 13:48:34.495303] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:31.804 [2024-11-20 13:48:34.497046] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:32.064 [2024-11-20 13:48:34.762772] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:32.064 [2024-11-20 13:48:34.763674] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:32.328 143.00 IOPS, 429.00 MiB/s [2024-11-20T13:48:35.245Z] [2024-11-20 13:48:35.125625] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:32.328 [2024-11-20 13:48:35.127763] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:32.586 13:48:35 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:32.586 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:32.586 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:32.586 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:32.586 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:32.586 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:32.586 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:32.586 13:48:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.586 13:48:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:32.586 13:48:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.586 [2024-11-20 13:48:35.365273] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:32.586 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:32.586 "name": "raid_bdev1", 00:28:32.586 "uuid": "dc7fb352-7f0c-4611-9963-3678af4771e0", 00:28:32.586 "strip_size_kb": 0, 00:28:32.586 "state": "online", 00:28:32.586 "raid_level": "raid1", 00:28:32.586 "superblock": false, 00:28:32.586 "num_base_bdevs": 4, 00:28:32.586 "num_base_bdevs_discovered": 4, 00:28:32.586 "num_base_bdevs_operational": 4, 00:28:32.586 "process": { 00:28:32.586 "type": "rebuild", 00:28:32.586 "target": "spare", 00:28:32.586 "progress": { 00:28:32.586 "blocks": 8192, 00:28:32.586 "percent": 12 00:28:32.586 } 00:28:32.586 }, 00:28:32.586 "base_bdevs_list": [ 00:28:32.586 { 00:28:32.586 "name": "spare", 00:28:32.586 "uuid": 
"9c1e8c02-86c3-5bd5-ac70-6c24e27efef6", 00:28:32.586 "is_configured": true, 00:28:32.586 "data_offset": 0, 00:28:32.586 "data_size": 65536 00:28:32.586 }, 00:28:32.586 { 00:28:32.586 "name": "BaseBdev2", 00:28:32.586 "uuid": "ae5d5b1e-b5ae-59c9-9a54-5e02de1cb795", 00:28:32.586 "is_configured": true, 00:28:32.586 "data_offset": 0, 00:28:32.586 "data_size": 65536 00:28:32.586 }, 00:28:32.586 { 00:28:32.586 "name": "BaseBdev3", 00:28:32.586 "uuid": "c0596ff6-6f34-5e7f-8882-c38b968e2e23", 00:28:32.586 "is_configured": true, 00:28:32.586 "data_offset": 0, 00:28:32.586 "data_size": 65536 00:28:32.586 }, 00:28:32.586 { 00:28:32.586 "name": "BaseBdev4", 00:28:32.586 "uuid": "61820f65-724f-5d46-8560-c13f406d6846", 00:28:32.586 "is_configured": true, 00:28:32.586 "data_offset": 0, 00:28:32.586 "data_size": 65536 00:28:32.586 } 00:28:32.586 ] 00:28:32.586 }' 00:28:32.586 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:32.586 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:32.586 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:32.586 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:32.586 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:32.586 13:48:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.586 13:48:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:32.846 [2024-11-20 13:48:35.511483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:32.846 [2024-11-20 13:48:35.600242] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:32.846 [2024-11-20 13:48:35.723816] bdev_raid.c:2571:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:32.846 [2024-11-20 13:48:35.726966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:32.846 [2024-11-20 13:48:35.727139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:32.846 [2024-11-20 13:48:35.727174] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:33.104 [2024-11-20 13:48:35.760776] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:28:33.104 13:48:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.104 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:33.104 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:33.104 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:33.104 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:33.104 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:33.104 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:33.104 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:33.104 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:33.104 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:33.104 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:33.104 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:33.104 13:48:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:33.104 13:48:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.104 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:33.104 13:48:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.104 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:33.104 "name": "raid_bdev1", 00:28:33.104 "uuid": "dc7fb352-7f0c-4611-9963-3678af4771e0", 00:28:33.104 "strip_size_kb": 0, 00:28:33.104 "state": "online", 00:28:33.104 "raid_level": "raid1", 00:28:33.104 "superblock": false, 00:28:33.104 "num_base_bdevs": 4, 00:28:33.104 "num_base_bdevs_discovered": 3, 00:28:33.104 "num_base_bdevs_operational": 3, 00:28:33.104 "base_bdevs_list": [ 00:28:33.104 { 00:28:33.104 "name": null, 00:28:33.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:33.104 "is_configured": false, 00:28:33.104 "data_offset": 0, 00:28:33.104 "data_size": 65536 00:28:33.104 }, 00:28:33.104 { 00:28:33.104 "name": "BaseBdev2", 00:28:33.104 "uuid": "ae5d5b1e-b5ae-59c9-9a54-5e02de1cb795", 00:28:33.104 "is_configured": true, 00:28:33.104 "data_offset": 0, 00:28:33.104 "data_size": 65536 00:28:33.104 }, 00:28:33.104 { 00:28:33.104 "name": "BaseBdev3", 00:28:33.104 "uuid": "c0596ff6-6f34-5e7f-8882-c38b968e2e23", 00:28:33.104 "is_configured": true, 00:28:33.104 "data_offset": 0, 00:28:33.104 "data_size": 65536 00:28:33.104 }, 00:28:33.104 { 00:28:33.104 "name": "BaseBdev4", 00:28:33.104 "uuid": "61820f65-724f-5d46-8560-c13f406d6846", 00:28:33.104 "is_configured": true, 00:28:33.104 "data_offset": 0, 00:28:33.104 "data_size": 65536 00:28:33.104 } 00:28:33.104 ] 00:28:33.104 }' 00:28:33.104 13:48:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:33.104 13:48:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.672 126.50 IOPS, 379.50 MiB/s 
[2024-11-20T13:48:36.589Z] 13:48:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:33.672 13:48:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:33.672 13:48:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:33.672 13:48:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:33.672 13:48:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:33.672 13:48:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:33.672 13:48:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.672 13:48:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.672 13:48:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:33.672 13:48:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.672 13:48:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:33.672 "name": "raid_bdev1", 00:28:33.672 "uuid": "dc7fb352-7f0c-4611-9963-3678af4771e0", 00:28:33.672 "strip_size_kb": 0, 00:28:33.672 "state": "online", 00:28:33.672 "raid_level": "raid1", 00:28:33.672 "superblock": false, 00:28:33.672 "num_base_bdevs": 4, 00:28:33.672 "num_base_bdevs_discovered": 3, 00:28:33.672 "num_base_bdevs_operational": 3, 00:28:33.672 "base_bdevs_list": [ 00:28:33.672 { 00:28:33.672 "name": null, 00:28:33.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:33.672 "is_configured": false, 00:28:33.672 "data_offset": 0, 00:28:33.672 "data_size": 65536 00:28:33.672 }, 00:28:33.672 { 00:28:33.672 "name": "BaseBdev2", 00:28:33.672 "uuid": "ae5d5b1e-b5ae-59c9-9a54-5e02de1cb795", 00:28:33.672 "is_configured": true, 00:28:33.672 
"data_offset": 0, 00:28:33.672 "data_size": 65536 00:28:33.672 }, 00:28:33.672 { 00:28:33.672 "name": "BaseBdev3", 00:28:33.672 "uuid": "c0596ff6-6f34-5e7f-8882-c38b968e2e23", 00:28:33.672 "is_configured": true, 00:28:33.672 "data_offset": 0, 00:28:33.672 "data_size": 65536 00:28:33.672 }, 00:28:33.672 { 00:28:33.672 "name": "BaseBdev4", 00:28:33.672 "uuid": "61820f65-724f-5d46-8560-c13f406d6846", 00:28:33.672 "is_configured": true, 00:28:33.672 "data_offset": 0, 00:28:33.672 "data_size": 65536 00:28:33.672 } 00:28:33.672 ] 00:28:33.672 }' 00:28:33.672 13:48:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:33.672 13:48:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:33.672 13:48:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:33.672 13:48:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:33.672 13:48:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:33.672 13:48:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.672 13:48:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:33.672 [2024-11-20 13:48:36.470959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:33.672 13:48:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.672 13:48:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:28:33.672 [2024-11-20 13:48:36.538037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:28:33.672 [2024-11-20 13:48:36.540924] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:33.931 [2024-11-20 13:48:36.678498] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:33.931 [2024-11-20 13:48:36.813359] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:33.931 [2024-11-20 13:48:36.814048] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:34.447 149.67 IOPS, 449.00 MiB/s [2024-11-20T13:48:37.364Z] [2024-11-20 13:48:37.172609] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:34.447 [2024-11-20 13:48:37.282227] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:34.447 [2024-11-20 13:48:37.283119] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:34.716 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:34.716 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:34.716 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:34.716 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:34.716 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:34.716 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:34.716 13:48:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.716 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:34.716 13:48:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:34.716 13:48:37 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.716 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:34.716 "name": "raid_bdev1", 00:28:34.716 "uuid": "dc7fb352-7f0c-4611-9963-3678af4771e0", 00:28:34.716 "strip_size_kb": 0, 00:28:34.716 "state": "online", 00:28:34.716 "raid_level": "raid1", 00:28:34.716 "superblock": false, 00:28:34.716 "num_base_bdevs": 4, 00:28:34.716 "num_base_bdevs_discovered": 4, 00:28:34.717 "num_base_bdevs_operational": 4, 00:28:34.717 "process": { 00:28:34.717 "type": "rebuild", 00:28:34.717 "target": "spare", 00:28:34.717 "progress": { 00:28:34.717 "blocks": 12288, 00:28:34.717 "percent": 18 00:28:34.717 } 00:28:34.717 }, 00:28:34.717 "base_bdevs_list": [ 00:28:34.717 { 00:28:34.717 "name": "spare", 00:28:34.717 "uuid": "9c1e8c02-86c3-5bd5-ac70-6c24e27efef6", 00:28:34.717 "is_configured": true, 00:28:34.717 "data_offset": 0, 00:28:34.717 "data_size": 65536 00:28:34.717 }, 00:28:34.717 { 00:28:34.717 "name": "BaseBdev2", 00:28:34.717 "uuid": "ae5d5b1e-b5ae-59c9-9a54-5e02de1cb795", 00:28:34.717 "is_configured": true, 00:28:34.717 "data_offset": 0, 00:28:34.717 "data_size": 65536 00:28:34.717 }, 00:28:34.717 { 00:28:34.717 "name": "BaseBdev3", 00:28:34.717 "uuid": "c0596ff6-6f34-5e7f-8882-c38b968e2e23", 00:28:34.717 "is_configured": true, 00:28:34.717 "data_offset": 0, 00:28:34.717 "data_size": 65536 00:28:34.717 }, 00:28:34.717 { 00:28:34.717 "name": "BaseBdev4", 00:28:34.717 "uuid": "61820f65-724f-5d46-8560-c13f406d6846", 00:28:34.717 "is_configured": true, 00:28:34.717 "data_offset": 0, 00:28:34.717 "data_size": 65536 00:28:34.717 } 00:28:34.717 ] 00:28:34.717 }' 00:28:34.717 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:34.977 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:34.977 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:28:34.977 [2024-11-20 13:48:37.641153] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:34.977 [2024-11-20 13:48:37.642961] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:34.977 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:34.977 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:28:34.977 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:28:34.977 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:28:34.977 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:28:34.977 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:28:34.977 13:48:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.978 13:48:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:34.978 [2024-11-20 13:48:37.694830] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:34.978 [2024-11-20 13:48:37.856659] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:34.978 129.25 IOPS, 387.75 MiB/s [2024-11-20T13:48:37.895Z] [2024-11-20 13:48:37.886563] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:28:34.978 [2024-11-20 13:48:37.886628] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:28:35.237 13:48:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.237 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 
-- # base_bdevs[1]= 00:28:35.237 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:28:35.237 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:35.237 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:35.237 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:35.237 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:35.237 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:35.237 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:35.237 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.237 13:48:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.237 13:48:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:35.237 13:48:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.237 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:35.237 "name": "raid_bdev1", 00:28:35.237 "uuid": "dc7fb352-7f0c-4611-9963-3678af4771e0", 00:28:35.237 "strip_size_kb": 0, 00:28:35.237 "state": "online", 00:28:35.237 "raid_level": "raid1", 00:28:35.237 "superblock": false, 00:28:35.237 "num_base_bdevs": 4, 00:28:35.237 "num_base_bdevs_discovered": 3, 00:28:35.237 "num_base_bdevs_operational": 3, 00:28:35.237 "process": { 00:28:35.237 "type": "rebuild", 00:28:35.237 "target": "spare", 00:28:35.237 "progress": { 00:28:35.237 "blocks": 16384, 00:28:35.237 "percent": 25 00:28:35.237 } 00:28:35.237 }, 00:28:35.237 "base_bdevs_list": [ 00:28:35.237 { 00:28:35.237 "name": "spare", 
00:28:35.237 "uuid": "9c1e8c02-86c3-5bd5-ac70-6c24e27efef6", 00:28:35.237 "is_configured": true, 00:28:35.237 "data_offset": 0, 00:28:35.237 "data_size": 65536 00:28:35.237 }, 00:28:35.237 { 00:28:35.237 "name": null, 00:28:35.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:35.237 "is_configured": false, 00:28:35.237 "data_offset": 0, 00:28:35.237 "data_size": 65536 00:28:35.237 }, 00:28:35.237 { 00:28:35.237 "name": "BaseBdev3", 00:28:35.237 "uuid": "c0596ff6-6f34-5e7f-8882-c38b968e2e23", 00:28:35.237 "is_configured": true, 00:28:35.237 "data_offset": 0, 00:28:35.237 "data_size": 65536 00:28:35.237 }, 00:28:35.237 { 00:28:35.237 "name": "BaseBdev4", 00:28:35.237 "uuid": "61820f65-724f-5d46-8560-c13f406d6846", 00:28:35.237 "is_configured": true, 00:28:35.237 "data_offset": 0, 00:28:35.237 "data_size": 65536 00:28:35.237 } 00:28:35.238 ] 00:28:35.238 }' 00:28:35.238 13:48:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:35.238 13:48:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:35.238 13:48:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:35.238 13:48:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:35.238 13:48:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=529 00:28:35.238 13:48:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:35.238 13:48:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:35.238 13:48:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:35.238 13:48:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:35.238 13:48:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:28:35.238 13:48:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:35.238 13:48:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:35.238 13:48:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.238 13:48:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.238 13:48:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:35.238 13:48:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.238 13:48:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:35.238 "name": "raid_bdev1", 00:28:35.238 "uuid": "dc7fb352-7f0c-4611-9963-3678af4771e0", 00:28:35.238 "strip_size_kb": 0, 00:28:35.238 "state": "online", 00:28:35.238 "raid_level": "raid1", 00:28:35.238 "superblock": false, 00:28:35.238 "num_base_bdevs": 4, 00:28:35.238 "num_base_bdevs_discovered": 3, 00:28:35.238 "num_base_bdevs_operational": 3, 00:28:35.238 "process": { 00:28:35.238 "type": "rebuild", 00:28:35.238 "target": "spare", 00:28:35.238 "progress": { 00:28:35.238 "blocks": 18432, 00:28:35.238 "percent": 28 00:28:35.238 } 00:28:35.238 }, 00:28:35.238 "base_bdevs_list": [ 00:28:35.238 { 00:28:35.238 "name": "spare", 00:28:35.238 "uuid": "9c1e8c02-86c3-5bd5-ac70-6c24e27efef6", 00:28:35.238 "is_configured": true, 00:28:35.238 "data_offset": 0, 00:28:35.238 "data_size": 65536 00:28:35.238 }, 00:28:35.238 { 00:28:35.238 "name": null, 00:28:35.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:35.238 "is_configured": false, 00:28:35.238 "data_offset": 0, 00:28:35.238 "data_size": 65536 00:28:35.238 }, 00:28:35.238 { 00:28:35.238 "name": "BaseBdev3", 00:28:35.238 "uuid": "c0596ff6-6f34-5e7f-8882-c38b968e2e23", 00:28:35.238 "is_configured": true, 00:28:35.238 "data_offset": 0, 00:28:35.238 "data_size": 65536 
00:28:35.238 }, 00:28:35.238 { 00:28:35.238 "name": "BaseBdev4", 00:28:35.238 "uuid": "61820f65-724f-5d46-8560-c13f406d6846", 00:28:35.238 "is_configured": true, 00:28:35.238 "data_offset": 0, 00:28:35.238 "data_size": 65536 00:28:35.238 } 00:28:35.238 ] 00:28:35.238 }' 00:28:35.238 13:48:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:35.238 [2024-11-20 13:48:38.139784] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:35.496 13:48:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:35.497 13:48:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:35.497 13:48:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:35.497 13:48:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:35.497 [2024-11-20 13:48:38.261219] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:35.755 [2024-11-20 13:48:38.608524] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:28:36.583 118.40 IOPS, 355.20 MiB/s [2024-11-20T13:48:39.500Z] 13:48:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:36.583 13:48:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:36.583 13:48:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:36.583 13:48:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:36.583 13:48:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:36.583 13:48:39 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:36.583 13:48:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:36.583 13:48:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:36.583 13:48:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.583 13:48:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:36.583 [2024-11-20 13:48:39.272487] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:28:36.583 13:48:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.583 13:48:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:36.583 "name": "raid_bdev1", 00:28:36.583 "uuid": "dc7fb352-7f0c-4611-9963-3678af4771e0", 00:28:36.583 "strip_size_kb": 0, 00:28:36.583 "state": "online", 00:28:36.583 "raid_level": "raid1", 00:28:36.583 "superblock": false, 00:28:36.583 "num_base_bdevs": 4, 00:28:36.583 "num_base_bdevs_discovered": 3, 00:28:36.583 "num_base_bdevs_operational": 3, 00:28:36.583 "process": { 00:28:36.583 "type": "rebuild", 00:28:36.583 "target": "spare", 00:28:36.583 "progress": { 00:28:36.583 "blocks": 36864, 00:28:36.583 "percent": 56 00:28:36.583 } 00:28:36.583 }, 00:28:36.583 "base_bdevs_list": [ 00:28:36.583 { 00:28:36.583 "name": "spare", 00:28:36.583 "uuid": "9c1e8c02-86c3-5bd5-ac70-6c24e27efef6", 00:28:36.583 "is_configured": true, 00:28:36.583 "data_offset": 0, 00:28:36.583 "data_size": 65536 00:28:36.583 }, 00:28:36.583 { 00:28:36.583 "name": null, 00:28:36.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:36.583 "is_configured": false, 00:28:36.583 "data_offset": 0, 00:28:36.583 "data_size": 65536 00:28:36.583 }, 00:28:36.583 { 00:28:36.583 "name": "BaseBdev3", 00:28:36.583 "uuid": 
"c0596ff6-6f34-5e7f-8882-c38b968e2e23", 00:28:36.583 "is_configured": true, 00:28:36.583 "data_offset": 0, 00:28:36.583 "data_size": 65536 00:28:36.583 }, 00:28:36.583 { 00:28:36.583 "name": "BaseBdev4", 00:28:36.583 "uuid": "61820f65-724f-5d46-8560-c13f406d6846", 00:28:36.583 "is_configured": true, 00:28:36.583 "data_offset": 0, 00:28:36.583 "data_size": 65536 00:28:36.583 } 00:28:36.583 ] 00:28:36.583 }' 00:28:36.583 13:48:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:36.583 13:48:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:36.583 13:48:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:36.583 [2024-11-20 13:48:39.383089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:28:36.583 13:48:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:36.583 13:48:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:36.842 [2024-11-20 13:48:39.724195] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:28:37.100 [2024-11-20 13:48:39.835507] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:28:37.357 105.50 IOPS, 316.50 MiB/s [2024-11-20T13:48:40.274Z] [2024-11-20 13:48:40.168326] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:28:37.616 [2024-11-20 13:48:40.281515] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:28:37.616 13:48:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:37.616 13:48:40 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:37.616 13:48:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:37.616 13:48:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:37.616 13:48:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:37.616 13:48:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:37.616 13:48:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:37.616 13:48:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:37.616 13:48:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.616 13:48:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:37.616 13:48:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.616 13:48:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:37.616 "name": "raid_bdev1", 00:28:37.616 "uuid": "dc7fb352-7f0c-4611-9963-3678af4771e0", 00:28:37.616 "strip_size_kb": 0, 00:28:37.616 "state": "online", 00:28:37.616 "raid_level": "raid1", 00:28:37.616 "superblock": false, 00:28:37.616 "num_base_bdevs": 4, 00:28:37.616 "num_base_bdevs_discovered": 3, 00:28:37.616 "num_base_bdevs_operational": 3, 00:28:37.616 "process": { 00:28:37.616 "type": "rebuild", 00:28:37.616 "target": "spare", 00:28:37.616 "progress": { 00:28:37.616 "blocks": 53248, 00:28:37.616 "percent": 81 00:28:37.616 } 00:28:37.616 }, 00:28:37.616 "base_bdevs_list": [ 00:28:37.616 { 00:28:37.616 "name": "spare", 00:28:37.616 "uuid": "9c1e8c02-86c3-5bd5-ac70-6c24e27efef6", 00:28:37.616 "is_configured": true, 00:28:37.616 "data_offset": 0, 00:28:37.616 "data_size": 65536 00:28:37.616 }, 00:28:37.616 { 
00:28:37.616 "name": null, 00:28:37.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:37.616 "is_configured": false, 00:28:37.616 "data_offset": 0, 00:28:37.616 "data_size": 65536 00:28:37.616 }, 00:28:37.616 { 00:28:37.616 "name": "BaseBdev3", 00:28:37.616 "uuid": "c0596ff6-6f34-5e7f-8882-c38b968e2e23", 00:28:37.616 "is_configured": true, 00:28:37.616 "data_offset": 0, 00:28:37.616 "data_size": 65536 00:28:37.616 }, 00:28:37.616 { 00:28:37.616 "name": "BaseBdev4", 00:28:37.616 "uuid": "61820f65-724f-5d46-8560-c13f406d6846", 00:28:37.616 "is_configured": true, 00:28:37.616 "data_offset": 0, 00:28:37.616 "data_size": 65536 00:28:37.616 } 00:28:37.616 ] 00:28:37.616 }' 00:28:37.616 13:48:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:37.616 13:48:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:37.616 13:48:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:37.874 13:48:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:37.874 13:48:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:38.391 94.86 IOPS, 284.57 MiB/s [2024-11-20T13:48:41.308Z] [2024-11-20 13:48:41.084417] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:38.391 [2024-11-20 13:48:41.180274] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:38.391 [2024-11-20 13:48:41.183757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:38.958 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:38.958 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:38.958 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:28:38.958 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:38.959 "name": "raid_bdev1", 00:28:38.959 "uuid": "dc7fb352-7f0c-4611-9963-3678af4771e0", 00:28:38.959 "strip_size_kb": 0, 00:28:38.959 "state": "online", 00:28:38.959 "raid_level": "raid1", 00:28:38.959 "superblock": false, 00:28:38.959 "num_base_bdevs": 4, 00:28:38.959 "num_base_bdevs_discovered": 3, 00:28:38.959 "num_base_bdevs_operational": 3, 00:28:38.959 "base_bdevs_list": [ 00:28:38.959 { 00:28:38.959 "name": "spare", 00:28:38.959 "uuid": "9c1e8c02-86c3-5bd5-ac70-6c24e27efef6", 00:28:38.959 "is_configured": true, 00:28:38.959 "data_offset": 0, 00:28:38.959 "data_size": 65536 00:28:38.959 }, 00:28:38.959 { 00:28:38.959 "name": null, 00:28:38.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.959 "is_configured": false, 00:28:38.959 "data_offset": 0, 00:28:38.959 "data_size": 65536 00:28:38.959 }, 00:28:38.959 { 00:28:38.959 "name": "BaseBdev3", 00:28:38.959 "uuid": "c0596ff6-6f34-5e7f-8882-c38b968e2e23", 00:28:38.959 "is_configured": true, 00:28:38.959 "data_offset": 0, 
00:28:38.959 "data_size": 65536 00:28:38.959 }, 00:28:38.959 { 00:28:38.959 "name": "BaseBdev4", 00:28:38.959 "uuid": "61820f65-724f-5d46-8560-c13f406d6846", 00:28:38.959 "is_configured": true, 00:28:38.959 "data_offset": 0, 00:28:38.959 "data_size": 65536 00:28:38.959 } 00:28:38.959 ] 00:28:38.959 }' 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.959 13:48:41 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:38.959 "name": "raid_bdev1", 00:28:38.959 "uuid": "dc7fb352-7f0c-4611-9963-3678af4771e0", 00:28:38.959 "strip_size_kb": 0, 00:28:38.959 "state": "online", 00:28:38.959 "raid_level": "raid1", 00:28:38.959 "superblock": false, 00:28:38.959 "num_base_bdevs": 4, 00:28:38.959 "num_base_bdevs_discovered": 3, 00:28:38.959 "num_base_bdevs_operational": 3, 00:28:38.959 "base_bdevs_list": [ 00:28:38.959 { 00:28:38.959 "name": "spare", 00:28:38.959 "uuid": "9c1e8c02-86c3-5bd5-ac70-6c24e27efef6", 00:28:38.959 "is_configured": true, 00:28:38.959 "data_offset": 0, 00:28:38.959 "data_size": 65536 00:28:38.959 }, 00:28:38.959 { 00:28:38.959 "name": null, 00:28:38.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.959 "is_configured": false, 00:28:38.959 "data_offset": 0, 00:28:38.959 "data_size": 65536 00:28:38.959 }, 00:28:38.959 { 00:28:38.959 "name": "BaseBdev3", 00:28:38.959 "uuid": "c0596ff6-6f34-5e7f-8882-c38b968e2e23", 00:28:38.959 "is_configured": true, 00:28:38.959 "data_offset": 0, 00:28:38.959 "data_size": 65536 00:28:38.959 }, 00:28:38.959 { 00:28:38.959 "name": "BaseBdev4", 00:28:38.959 "uuid": "61820f65-724f-5d46-8560-c13f406d6846", 00:28:38.959 "is_configured": true, 00:28:38.959 "data_offset": 0, 00:28:38.959 "data_size": 65536 00:28:38.959 } 00:28:38.959 ] 00:28:38.959 }' 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:38.959 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:39.217 88.38 IOPS, 265.12 MiB/s [2024-11-20T13:48:42.134Z] 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:39.217 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:28:39.218 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:39.218 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:39.218 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:39.218 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:39.218 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:39.218 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:39.218 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:39.218 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:39.218 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:39.218 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:39.218 13:48:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.218 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:39.218 13:48:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:39.218 13:48:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.218 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:39.218 "name": "raid_bdev1", 00:28:39.218 "uuid": "dc7fb352-7f0c-4611-9963-3678af4771e0", 00:28:39.218 "strip_size_kb": 0, 00:28:39.218 "state": "online", 00:28:39.218 "raid_level": "raid1", 00:28:39.218 "superblock": false, 00:28:39.218 "num_base_bdevs": 4, 00:28:39.218 "num_base_bdevs_discovered": 3, 00:28:39.218 "num_base_bdevs_operational": 
3, 00:28:39.218 "base_bdevs_list": [ 00:28:39.218 { 00:28:39.218 "name": "spare", 00:28:39.218 "uuid": "9c1e8c02-86c3-5bd5-ac70-6c24e27efef6", 00:28:39.218 "is_configured": true, 00:28:39.218 "data_offset": 0, 00:28:39.218 "data_size": 65536 00:28:39.218 }, 00:28:39.218 { 00:28:39.218 "name": null, 00:28:39.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.218 "is_configured": false, 00:28:39.218 "data_offset": 0, 00:28:39.218 "data_size": 65536 00:28:39.218 }, 00:28:39.218 { 00:28:39.218 "name": "BaseBdev3", 00:28:39.218 "uuid": "c0596ff6-6f34-5e7f-8882-c38b968e2e23", 00:28:39.218 "is_configured": true, 00:28:39.218 "data_offset": 0, 00:28:39.218 "data_size": 65536 00:28:39.218 }, 00:28:39.218 { 00:28:39.218 "name": "BaseBdev4", 00:28:39.218 "uuid": "61820f65-724f-5d46-8560-c13f406d6846", 00:28:39.218 "is_configured": true, 00:28:39.218 "data_offset": 0, 00:28:39.218 "data_size": 65536 00:28:39.218 } 00:28:39.218 ] 00:28:39.218 }' 00:28:39.218 13:48:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:39.218 13:48:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:39.785 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:39.785 13:48:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.785 13:48:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:39.785 [2024-11-20 13:48:42.418123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:39.785 [2024-11-20 13:48:42.418164] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:39.785 00:28:39.785 Latency(us) 00:28:39.785 [2024-11-20T13:48:42.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.786 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:28:39.786 
raid_bdev1 : 8.65 84.28 252.84 0.00 0.00 16066.56 279.27 123922.62 00:28:39.786 [2024-11-20T13:48:42.703Z] =================================================================================================================== 00:28:39.786 [2024-11-20T13:48:42.703Z] Total : 84.28 252.84 0.00 0.00 16066.56 279.27 123922.62 00:28:39.786 [2024-11-20 13:48:42.530885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:39.786 [2024-11-20 13:48:42.531020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:39.786 [2024-11-20 13:48:42.531162] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:39.786 [2024-11-20 13:48:42.531189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:39.786 { 00:28:39.786 "results": [ 00:28:39.786 { 00:28:39.786 "job": "raid_bdev1", 00:28:39.786 "core_mask": "0x1", 00:28:39.786 "workload": "randrw", 00:28:39.786 "percentage": 50, 00:28:39.786 "status": "finished", 00:28:39.786 "queue_depth": 2, 00:28:39.786 "io_size": 3145728, 00:28:39.786 "runtime": 8.649886, 00:28:39.786 "iops": 84.27856737071448, 00:28:39.786 "mibps": 252.83570211214345, 00:28:39.786 "io_failed": 0, 00:28:39.786 "io_timeout": 0, 00:28:39.786 "avg_latency_us": 16066.557446065597, 00:28:39.786 "min_latency_us": 279.27272727272725, 00:28:39.786 "max_latency_us": 123922.61818181818 00:28:39.786 } 00:28:39.786 ], 00:28:39.786 "core_count": 1 00:28:39.786 } 00:28:39.786 13:48:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.786 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:39.786 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:28:39.786 13:48:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.786 13:48:42 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:39.786 13:48:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.786 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:28:39.786 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:28:39.786 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:28:39.786 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:28:39.786 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:39.786 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:28:39.786 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:39.786 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:39.786 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:39.786 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:28:39.786 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:39.786 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:39.786 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:28:40.045 /dev/nbd0 00:28:40.045 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:40.045 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:40.045 13:48:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:40.045 13:48:42 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@873 -- # local i 00:28:40.045 13:48:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:40.045 13:48:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:40.045 13:48:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:40.045 13:48:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:28:40.045 13:48:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:40.045 13:48:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:40.045 13:48:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:40.045 1+0 records in 00:28:40.045 1+0 records out 00:28:40.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450183 s, 9.1 MB/s 00:28:40.045 13:48:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:40.045 13:48:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:28:40.045 13:48:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:40.304 13:48:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:40.304 13:48:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:28:40.304 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:40.304 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:40.304 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:28:40.304 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' 
']' 00:28:40.304 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:28:40.304 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:28:40.304 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:28:40.304 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:28:40.304 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:40.304 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:28:40.304 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:40.304 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:28:40.304 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:40.304 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:28:40.304 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:40.304 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:40.304 13:48:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:28:40.562 /dev/nbd1 00:28:40.562 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:40.562 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:40.562 13:48:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:28:40.562 13:48:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:28:40.562 13:48:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:40.562 
13:48:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:40.562 13:48:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:28:40.562 13:48:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:28:40.562 13:48:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:40.562 13:48:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:40.562 13:48:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:40.562 1+0 records in 00:28:40.562 1+0 records out 00:28:40.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000673347 s, 6.1 MB/s 00:28:40.562 13:48:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:40.562 13:48:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:28:40.562 13:48:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:40.562 13:48:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:40.562 13:48:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:28:40.562 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:40.562 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:40.562 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:40.821 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:28:40.821 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:28:40.821 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:28:40.821 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:40.821 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:28:40.821 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:40.821 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:28:41.079 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:41.079 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:41.079 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:41.079 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:41.079 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:41.079 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:41.079 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:28:41.079 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:41.079 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:28:41.079 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:28:41.079 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:28:41.079 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:41.079 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:28:41.079 13:48:43 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:41.079 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:28:41.079 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:41.079 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:28:41.079 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:41.079 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:41.079 13:48:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:28:41.339 /dev/nbd1 00:28:41.339 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:41.339 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:41.339 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:28:41.339 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:28:41.339 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:41.339 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:41.339 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:28:41.339 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:28:41.339 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:41.339 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:41.339 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:28:41.339 1+0 records in 00:28:41.339 1+0 records out 00:28:41.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564793 s, 7.3 MB/s 00:28:41.339 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:41.339 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:28:41.339 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:41.339 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:41.339 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:28:41.339 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:41.339 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:41.339 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:41.598 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:28:41.598 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:41.598 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:28:41.598 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:41.598 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:28:41.598 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:41.598 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:28:41.857 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:41.857 13:48:44 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:41.857 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:41.857 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:41.857 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:41.857 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:41.857 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:28:41.857 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:41.857 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:28:41.857 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:41.857 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:41.857 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:41.857 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:28:41.857 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:41.857 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79281 00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79281 ']' 00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79281 00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79281 00:28:42.116 killing process with pid 79281 00:28:42.116 Received shutdown signal, test time was about 11.103808 seconds 00:28:42.116 00:28:42.116 Latency(us) 00:28:42.116 [2024-11-20T13:48:45.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.116 [2024-11-20T13:48:45.033Z] =================================================================================================================== 00:28:42.116 [2024-11-20T13:48:45.033Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79281' 00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@973 -- # kill 79281 00:28:42.116 [2024-11-20 13:48:44.963998] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:42.116 13:48:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79281 00:28:42.683 [2024-11-20 13:48:45.346603] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:43.619 13:48:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:28:43.619 00:28:43.619 real 0m14.733s 00:28:43.619 user 0m19.366s 00:28:43.619 sys 0m1.963s 00:28:43.619 13:48:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:43.619 ************************************ 00:28:43.619 END TEST raid_rebuild_test_io 00:28:43.619 ************************************ 00:28:43.619 13:48:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:43.619 13:48:46 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:28:43.619 13:48:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:28:43.619 13:48:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:43.619 13:48:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:43.619 ************************************ 00:28:43.619 START TEST raid_rebuild_test_sb_io 00:28:43.619 ************************************ 00:28:43.619 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:28:43.619 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:28:43.619 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:28:43.619 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:28:43.619 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 
00:28:43.619 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:28:43.619 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:28:43.619 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:43.619 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:28:43.619 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:43.619 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:43.619 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:28:43.619 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 
-- # local strip_size 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79700 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79700 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79700 ']' 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:43.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:43.620 13:48:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:43.878 [2024-11-20 13:48:46.606139] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:28:43.878 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:43.878 Zero copy mechanism will not be used. 00:28:43.878 [2024-11-20 13:48:46.606341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79700 ] 00:28:43.878 [2024-11-20 13:48:46.782465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.136 [2024-11-20 13:48:46.900670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.396 [2024-11-20 13:48:47.090366] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:44.396 [2024-11-20 13:48:47.090439] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:44.964 BaseBdev1_malloc 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:44.964 [2024-11-20 13:48:47.708018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:44.964 [2024-11-20 13:48:47.708095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:44.964 [2024-11-20 13:48:47.708135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:44.964 [2024-11-20 13:48:47.708154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:44.964 [2024-11-20 13:48:47.711229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:44.964 [2024-11-20 13:48:47.711288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:44.964 BaseBdev1 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:44.964 BaseBdev2_malloc 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:44.964 [2024-11-20 13:48:47.762662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:44.964 [2024-11-20 13:48:47.762745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:44.964 [2024-11-20 13:48:47.762777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:44.964 [2024-11-20 13:48:47.762796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:44.964 [2024-11-20 13:48:47.765807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:44.964 [2024-11-20 13:48:47.765858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:44.964 BaseBdev2 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:44.964 BaseBdev3_malloc 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.964 13:48:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:44.964 [2024-11-20 13:48:47.831378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:44.964 [2024-11-20 13:48:47.831466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:44.964 [2024-11-20 13:48:47.831496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:44.964 [2024-11-20 13:48:47.831514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:44.964 [2024-11-20 13:48:47.834390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:44.964 [2024-11-20 13:48:47.834470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:44.964 BaseBdev3 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.964 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:45.223 BaseBdev4_malloc 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:45.223 [2024-11-20 13:48:47.886627] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:28:45.223 [2024-11-20 13:48:47.886707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:45.223 [2024-11-20 13:48:47.886738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:28:45.223 [2024-11-20 13:48:47.886757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:45.223 [2024-11-20 13:48:47.889653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:45.223 [2024-11-20 13:48:47.889708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:45.223 BaseBdev4 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:45.223 spare_malloc 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:45.223 spare_delay 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:45.223 [2024-11-20 13:48:47.948543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:45.223 [2024-11-20 13:48:47.948611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:45.223 [2024-11-20 13:48:47.948638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:28:45.223 [2024-11-20 13:48:47.948655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:45.223 [2024-11-20 13:48:47.951572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:45.223 [2024-11-20 13:48:47.951623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:45.223 spare 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.223 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:45.223 [2024-11-20 13:48:47.956607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:45.223 [2024-11-20 13:48:47.959186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:45.223 [2024-11-20 13:48:47.959280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:45.223 [2024-11-20 13:48:47.959359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:45.223 [2024-11-20 13:48:47.959611] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:28:45.223 [2024-11-20 13:48:47.959684] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:45.223 [2024-11-20 13:48:47.960031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:45.223 [2024-11-20 13:48:47.960295] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:45.224 [2024-11-20 13:48:47.960322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:45.224 [2024-11-20 13:48:47.960587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:45.224 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.224 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:45.224 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:45.224 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:45.224 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:45.224 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:45.224 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:28:45.224 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:45.224 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:45.224 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:45.224 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:45.224 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:28:45.224 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.224 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.224 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:45.224 13:48:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.224 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:45.224 "name": "raid_bdev1", 00:28:45.224 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:28:45.224 "strip_size_kb": 0, 00:28:45.224 "state": "online", 00:28:45.224 "raid_level": "raid1", 00:28:45.224 "superblock": true, 00:28:45.224 "num_base_bdevs": 4, 00:28:45.224 "num_base_bdevs_discovered": 4, 00:28:45.224 "num_base_bdevs_operational": 4, 00:28:45.224 "base_bdevs_list": [ 00:28:45.224 { 00:28:45.224 "name": "BaseBdev1", 00:28:45.224 "uuid": "d5391cb6-ef70-5581-acf3-c37a453817ea", 00:28:45.224 "is_configured": true, 00:28:45.224 "data_offset": 2048, 00:28:45.224 "data_size": 63488 00:28:45.224 }, 00:28:45.224 { 00:28:45.224 "name": "BaseBdev2", 00:28:45.224 "uuid": "fb29ab39-f495-50d5-a923-2e7255361c03", 00:28:45.224 "is_configured": true, 00:28:45.224 "data_offset": 2048, 00:28:45.224 "data_size": 63488 00:28:45.224 }, 00:28:45.224 { 00:28:45.224 "name": "BaseBdev3", 00:28:45.224 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:28:45.224 "is_configured": true, 00:28:45.224 "data_offset": 2048, 00:28:45.224 "data_size": 63488 00:28:45.224 }, 00:28:45.224 { 00:28:45.224 "name": "BaseBdev4", 00:28:45.224 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:28:45.224 "is_configured": true, 00:28:45.224 "data_offset": 2048, 00:28:45.224 "data_size": 63488 00:28:45.224 } 00:28:45.224 ] 00:28:45.224 }' 00:28:45.224 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:28:45.224 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:45.820 [2024-11-20 13:48:48.489432] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:28:45.820 13:48:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:45.820 [2024-11-20 13:48:48.584860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.820 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:45.820 "name": "raid_bdev1", 00:28:45.820 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:28:45.820 "strip_size_kb": 0, 00:28:45.820 "state": "online", 00:28:45.820 "raid_level": "raid1", 00:28:45.820 "superblock": true, 00:28:45.820 "num_base_bdevs": 4, 00:28:45.820 "num_base_bdevs_discovered": 3, 00:28:45.820 "num_base_bdevs_operational": 3, 00:28:45.820 "base_bdevs_list": [ 00:28:45.820 { 00:28:45.821 "name": null, 00:28:45.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:45.821 "is_configured": false, 00:28:45.821 "data_offset": 0, 00:28:45.821 "data_size": 63488 00:28:45.821 }, 00:28:45.821 { 00:28:45.821 "name": "BaseBdev2", 00:28:45.821 "uuid": "fb29ab39-f495-50d5-a923-2e7255361c03", 00:28:45.821 "is_configured": true, 00:28:45.821 "data_offset": 2048, 00:28:45.821 "data_size": 63488 00:28:45.821 }, 00:28:45.821 { 00:28:45.821 "name": "BaseBdev3", 00:28:45.821 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:28:45.821 "is_configured": true, 00:28:45.821 "data_offset": 2048, 00:28:45.821 "data_size": 63488 00:28:45.821 }, 00:28:45.821 { 00:28:45.821 "name": "BaseBdev4", 00:28:45.821 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:28:45.821 "is_configured": true, 00:28:45.821 "data_offset": 2048, 00:28:45.821 "data_size": 63488 00:28:45.821 } 00:28:45.821 ] 00:28:45.821 }' 00:28:45.821 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:45.821 13:48:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:45.821 [2024-11-20 13:48:48.685350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:28:45.821 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:45.821 Zero copy mechanism will not be used. 
00:28:45.821 Running I/O for 60 seconds... 00:28:46.387 13:48:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:46.387 13:48:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.387 13:48:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:46.387 [2024-11-20 13:48:49.140000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:46.387 13:48:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.387 13:48:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:28:46.387 [2024-11-20 13:48:49.239157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:28:46.387 [2024-11-20 13:48:49.242048] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:46.645 [2024-11-20 13:48:49.370307] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:46.645 [2024-11-20 13:48:49.372087] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:46.904 [2024-11-20 13:48:49.606275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:46.904 [2024-11-20 13:48:49.606680] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:47.163 146.00 IOPS, 438.00 MiB/s [2024-11-20T13:48:50.080Z] [2024-11-20 13:48:49.886553] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:47.163 [2024-11-20 13:48:49.887094] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:47.421 
[2024-11-20 13:48:50.108909] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:47.421 [2024-11-20 13:48:50.109870] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:47.421 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:47.421 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:47.421 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:47.421 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:47.421 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:47.421 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:47.421 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.421 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:47.421 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.421 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.421 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:47.421 "name": "raid_bdev1", 00:28:47.421 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:28:47.421 "strip_size_kb": 0, 00:28:47.421 "state": "online", 00:28:47.421 "raid_level": "raid1", 00:28:47.421 "superblock": true, 00:28:47.421 "num_base_bdevs": 4, 00:28:47.421 "num_base_bdevs_discovered": 4, 00:28:47.421 "num_base_bdevs_operational": 4, 00:28:47.421 "process": { 00:28:47.421 "type": "rebuild", 00:28:47.421 "target": 
"spare", 00:28:47.421 "progress": { 00:28:47.421 "blocks": 10240, 00:28:47.421 "percent": 16 00:28:47.421 } 00:28:47.421 }, 00:28:47.421 "base_bdevs_list": [ 00:28:47.421 { 00:28:47.421 "name": "spare", 00:28:47.421 "uuid": "8ee40988-16c5-5fc8-a7f0-6d51ca80159a", 00:28:47.421 "is_configured": true, 00:28:47.421 "data_offset": 2048, 00:28:47.421 "data_size": 63488 00:28:47.421 }, 00:28:47.421 { 00:28:47.421 "name": "BaseBdev2", 00:28:47.421 "uuid": "fb29ab39-f495-50d5-a923-2e7255361c03", 00:28:47.421 "is_configured": true, 00:28:47.421 "data_offset": 2048, 00:28:47.422 "data_size": 63488 00:28:47.422 }, 00:28:47.422 { 00:28:47.422 "name": "BaseBdev3", 00:28:47.422 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:28:47.422 "is_configured": true, 00:28:47.422 "data_offset": 2048, 00:28:47.422 "data_size": 63488 00:28:47.422 }, 00:28:47.422 { 00:28:47.422 "name": "BaseBdev4", 00:28:47.422 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:28:47.422 "is_configured": true, 00:28:47.422 "data_offset": 2048, 00:28:47.422 "data_size": 63488 00:28:47.422 } 00:28:47.422 ] 00:28:47.422 }' 00:28:47.422 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:47.422 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:47.422 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:47.680 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:47.680 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:47.680 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.680 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:47.680 [2024-11-20 13:48:50.378665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
spare 00:28:47.680 [2024-11-20 13:48:50.478561] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:47.681 [2024-11-20 13:48:50.479233] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:47.681 [2024-11-20 13:48:50.488090] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:47.681 [2024-11-20 13:48:50.510069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:47.681 [2024-11-20 13:48:50.510121] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:47.681 [2024-11-20 13:48:50.510140] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:47.681 [2024-11-20 13:48:50.560415] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:28:47.681 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.681 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:47.681 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:47.681 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:47.681 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:47.681 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:47.681 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:47.681 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:47.681 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:28:47.681 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:47.681 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:47.681 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:47.681 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.681 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.681 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:47.939 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.939 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:47.939 "name": "raid_bdev1", 00:28:47.939 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:28:47.939 "strip_size_kb": 0, 00:28:47.939 "state": "online", 00:28:47.939 "raid_level": "raid1", 00:28:47.939 "superblock": true, 00:28:47.939 "num_base_bdevs": 4, 00:28:47.939 "num_base_bdevs_discovered": 3, 00:28:47.939 "num_base_bdevs_operational": 3, 00:28:47.939 "base_bdevs_list": [ 00:28:47.939 { 00:28:47.939 "name": null, 00:28:47.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:47.939 "is_configured": false, 00:28:47.939 "data_offset": 0, 00:28:47.939 "data_size": 63488 00:28:47.939 }, 00:28:47.939 { 00:28:47.940 "name": "BaseBdev2", 00:28:47.940 "uuid": "fb29ab39-f495-50d5-a923-2e7255361c03", 00:28:47.940 "is_configured": true, 00:28:47.940 "data_offset": 2048, 00:28:47.940 "data_size": 63488 00:28:47.940 }, 00:28:47.940 { 00:28:47.940 "name": "BaseBdev3", 00:28:47.940 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:28:47.940 "is_configured": true, 00:28:47.940 "data_offset": 2048, 00:28:47.940 "data_size": 63488 00:28:47.940 }, 00:28:47.940 { 00:28:47.940 
"name": "BaseBdev4", 00:28:47.940 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:28:47.940 "is_configured": true, 00:28:47.940 "data_offset": 2048, 00:28:47.940 "data_size": 63488 00:28:47.940 } 00:28:47.940 ] 00:28:47.940 }' 00:28:47.940 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:47.940 13:48:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:48.199 114.00 IOPS, 342.00 MiB/s [2024-11-20T13:48:51.116Z] 13:48:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:48.199 13:48:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:48.199 13:48:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:48.199 13:48:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:48.199 13:48:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:48.199 13:48:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:48.199 13:48:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.199 13:48:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.199 13:48:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:48.457 13:48:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.457 13:48:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:48.457 "name": "raid_bdev1", 00:28:48.457 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:28:48.457 "strip_size_kb": 0, 00:28:48.457 "state": "online", 00:28:48.457 "raid_level": "raid1", 00:28:48.457 "superblock": true, 00:28:48.457 "num_base_bdevs": 4, 00:28:48.457 
"num_base_bdevs_discovered": 3, 00:28:48.457 "num_base_bdevs_operational": 3, 00:28:48.457 "base_bdevs_list": [ 00:28:48.457 { 00:28:48.457 "name": null, 00:28:48.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:48.457 "is_configured": false, 00:28:48.457 "data_offset": 0, 00:28:48.457 "data_size": 63488 00:28:48.457 }, 00:28:48.457 { 00:28:48.457 "name": "BaseBdev2", 00:28:48.457 "uuid": "fb29ab39-f495-50d5-a923-2e7255361c03", 00:28:48.457 "is_configured": true, 00:28:48.457 "data_offset": 2048, 00:28:48.457 "data_size": 63488 00:28:48.457 }, 00:28:48.457 { 00:28:48.457 "name": "BaseBdev3", 00:28:48.457 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:28:48.457 "is_configured": true, 00:28:48.457 "data_offset": 2048, 00:28:48.457 "data_size": 63488 00:28:48.457 }, 00:28:48.457 { 00:28:48.457 "name": "BaseBdev4", 00:28:48.457 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:28:48.457 "is_configured": true, 00:28:48.457 "data_offset": 2048, 00:28:48.457 "data_size": 63488 00:28:48.457 } 00:28:48.457 ] 00:28:48.457 }' 00:28:48.457 13:48:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:48.457 13:48:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:48.457 13:48:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:48.457 13:48:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:48.457 13:48:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:48.457 13:48:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.457 13:48:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:48.457 [2024-11-20 13:48:51.283446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:48.457 13:48:51 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.457 13:48:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:28:48.715 [2024-11-20 13:48:51.390598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:28:48.715 [2024-11-20 13:48:51.393496] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:48.715 [2024-11-20 13:48:51.506756] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:48.715 [2024-11-20 13:48:51.507563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:48.974 139.33 IOPS, 418.00 MiB/s [2024-11-20T13:48:51.891Z] [2024-11-20 13:48:51.726814] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:48.974 [2024-11-20 13:48:51.727166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:49.232 [2024-11-20 13:48:52.113764] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:49.232 [2024-11-20 13:48:52.114365] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:49.491 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:49.491 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:49.491 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:49.491 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:49.491 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:49.491 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:49.491 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:49.491 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.491 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:49.491 [2024-11-20 13:48:52.361685] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:49.491 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.787 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:49.787 "name": "raid_bdev1", 00:28:49.787 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:28:49.787 "strip_size_kb": 0, 00:28:49.787 "state": "online", 00:28:49.787 "raid_level": "raid1", 00:28:49.787 "superblock": true, 00:28:49.787 "num_base_bdevs": 4, 00:28:49.787 "num_base_bdevs_discovered": 4, 00:28:49.787 "num_base_bdevs_operational": 4, 00:28:49.787 "process": { 00:28:49.787 "type": "rebuild", 00:28:49.787 "target": "spare", 00:28:49.787 "progress": { 00:28:49.787 "blocks": 14336, 00:28:49.787 "percent": 22 00:28:49.787 } 00:28:49.787 }, 00:28:49.787 "base_bdevs_list": [ 00:28:49.787 { 00:28:49.787 "name": "spare", 00:28:49.787 "uuid": "8ee40988-16c5-5fc8-a7f0-6d51ca80159a", 00:28:49.787 "is_configured": true, 00:28:49.787 "data_offset": 2048, 00:28:49.787 "data_size": 63488 00:28:49.787 }, 00:28:49.787 { 00:28:49.787 "name": "BaseBdev2", 00:28:49.787 "uuid": "fb29ab39-f495-50d5-a923-2e7255361c03", 00:28:49.787 "is_configured": true, 00:28:49.787 "data_offset": 2048, 00:28:49.787 "data_size": 63488 00:28:49.787 }, 00:28:49.787 { 00:28:49.787 "name": "BaseBdev3", 00:28:49.787 "uuid": 
"6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:28:49.787 "is_configured": true, 00:28:49.787 "data_offset": 2048, 00:28:49.787 "data_size": 63488 00:28:49.787 }, 00:28:49.787 { 00:28:49.787 "name": "BaseBdev4", 00:28:49.787 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:28:49.787 "is_configured": true, 00:28:49.787 "data_offset": 2048, 00:28:49.787 "data_size": 63488 00:28:49.787 } 00:28:49.787 ] 00:28:49.787 }' 00:28:49.787 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:49.787 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:49.787 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:49.787 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:49.787 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:28:49.787 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:28:49.787 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:28:49.787 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:28:49.787 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:28:49.787 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:28:49.787 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:28:49.787 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.787 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:49.787 [2024-11-20 13:48:52.521391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:49.787 
[2024-11-20 13:48:52.599595] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:50.046 123.25 IOPS, 369.75 MiB/s [2024-11-20T13:48:52.964Z] [2024-11-20 13:48:52.715570] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:28:50.047 [2024-11-20 13:48:52.715692] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:28:50.047 [2024-11-20 13:48:52.715795] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:50.047 [2024-11-20 13:48:52.719997] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.047 13:48:52 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:50.047 "name": "raid_bdev1", 00:28:50.047 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:28:50.047 "strip_size_kb": 0, 00:28:50.047 "state": "online", 00:28:50.047 "raid_level": "raid1", 00:28:50.047 "superblock": true, 00:28:50.047 "num_base_bdevs": 4, 00:28:50.047 "num_base_bdevs_discovered": 3, 00:28:50.047 "num_base_bdevs_operational": 3, 00:28:50.047 "process": { 00:28:50.047 "type": "rebuild", 00:28:50.047 "target": "spare", 00:28:50.047 "progress": { 00:28:50.047 "blocks": 16384, 00:28:50.047 "percent": 25 00:28:50.047 } 00:28:50.047 }, 00:28:50.047 "base_bdevs_list": [ 00:28:50.047 { 00:28:50.047 "name": "spare", 00:28:50.047 "uuid": "8ee40988-16c5-5fc8-a7f0-6d51ca80159a", 00:28:50.047 "is_configured": true, 00:28:50.047 "data_offset": 2048, 00:28:50.047 "data_size": 63488 00:28:50.047 }, 00:28:50.047 { 00:28:50.047 "name": null, 00:28:50.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:50.047 "is_configured": false, 00:28:50.047 "data_offset": 0, 00:28:50.047 "data_size": 63488 00:28:50.047 }, 00:28:50.047 { 00:28:50.047 "name": "BaseBdev3", 00:28:50.047 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:28:50.047 "is_configured": true, 00:28:50.047 "data_offset": 2048, 00:28:50.047 "data_size": 63488 00:28:50.047 }, 00:28:50.047 { 00:28:50.047 "name": "BaseBdev4", 00:28:50.047 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:28:50.047 "is_configured": true, 00:28:50.047 "data_offset": 2048, 00:28:50.047 "data_size": 63488 00:28:50.047 } 00:28:50.047 ] 00:28:50.047 }' 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=543 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:50.047 "name": "raid_bdev1", 00:28:50.047 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:28:50.047 "strip_size_kb": 0, 
00:28:50.047 "state": "online", 00:28:50.047 "raid_level": "raid1", 00:28:50.047 "superblock": true, 00:28:50.047 "num_base_bdevs": 4, 00:28:50.047 "num_base_bdevs_discovered": 3, 00:28:50.047 "num_base_bdevs_operational": 3, 00:28:50.047 "process": { 00:28:50.047 "type": "rebuild", 00:28:50.047 "target": "spare", 00:28:50.047 "progress": { 00:28:50.047 "blocks": 16384, 00:28:50.047 "percent": 25 00:28:50.047 } 00:28:50.047 }, 00:28:50.047 "base_bdevs_list": [ 00:28:50.047 { 00:28:50.047 "name": "spare", 00:28:50.047 "uuid": "8ee40988-16c5-5fc8-a7f0-6d51ca80159a", 00:28:50.047 "is_configured": true, 00:28:50.047 "data_offset": 2048, 00:28:50.047 "data_size": 63488 00:28:50.047 }, 00:28:50.047 { 00:28:50.047 "name": null, 00:28:50.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:50.047 "is_configured": false, 00:28:50.047 "data_offset": 0, 00:28:50.047 "data_size": 63488 00:28:50.047 }, 00:28:50.047 { 00:28:50.047 "name": "BaseBdev3", 00:28:50.047 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:28:50.047 "is_configured": true, 00:28:50.047 "data_offset": 2048, 00:28:50.047 "data_size": 63488 00:28:50.047 }, 00:28:50.047 { 00:28:50.047 "name": "BaseBdev4", 00:28:50.047 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:28:50.047 "is_configured": true, 00:28:50.047 "data_offset": 2048, 00:28:50.047 "data_size": 63488 00:28:50.047 } 00:28:50.047 ] 00:28:50.047 }' 00:28:50.047 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:50.305 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:50.305 13:48:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:50.305 13:48:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:50.305 13:48:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:50.305 [2024-11-20 13:48:53.063937] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:50.305 [2024-11-20 13:48:53.183891] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:50.872 [2024-11-20 13:48:53.523538] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:28:51.129 112.00 IOPS, 336.00 MiB/s [2024-11-20T13:48:54.046Z] [2024-11-20 13:48:53.958717] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:28:51.388 13:48:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:51.388 13:48:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:51.388 13:48:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:51.388 13:48:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:51.388 13:48:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:51.388 13:48:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:51.388 13:48:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:51.388 13:48:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:51.388 13:48:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.388 13:48:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:51.388 [2024-11-20 13:48:54.070502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:28:51.388 13:48:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.388 13:48:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:51.388 "name": "raid_bdev1", 00:28:51.388 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:28:51.388 "strip_size_kb": 0, 00:28:51.388 "state": "online", 00:28:51.388 "raid_level": "raid1", 00:28:51.388 "superblock": true, 00:28:51.388 "num_base_bdevs": 4, 00:28:51.388 "num_base_bdevs_discovered": 3, 00:28:51.388 "num_base_bdevs_operational": 3, 00:28:51.388 "process": { 00:28:51.388 "type": "rebuild", 00:28:51.388 "target": "spare", 00:28:51.388 "progress": { 00:28:51.388 "blocks": 32768, 00:28:51.388 "percent": 51 00:28:51.388 } 00:28:51.388 }, 00:28:51.388 "base_bdevs_list": [ 00:28:51.388 { 00:28:51.388 "name": "spare", 00:28:51.388 "uuid": "8ee40988-16c5-5fc8-a7f0-6d51ca80159a", 00:28:51.388 "is_configured": true, 00:28:51.388 "data_offset": 2048, 00:28:51.388 "data_size": 63488 00:28:51.388 }, 00:28:51.388 { 00:28:51.388 "name": null, 00:28:51.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:51.388 "is_configured": false, 00:28:51.388 "data_offset": 0, 00:28:51.388 "data_size": 63488 00:28:51.388 }, 00:28:51.388 { 00:28:51.388 "name": "BaseBdev3", 00:28:51.388 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:28:51.388 "is_configured": true, 00:28:51.388 "data_offset": 2048, 00:28:51.388 "data_size": 63488 00:28:51.388 }, 00:28:51.388 { 00:28:51.388 "name": "BaseBdev4", 00:28:51.388 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:28:51.388 "is_configured": true, 00:28:51.388 "data_offset": 2048, 00:28:51.388 "data_size": 63488 00:28:51.388 } 00:28:51.388 ] 00:28:51.388 }' 00:28:51.388 13:48:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:51.388 13:48:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:51.388 13:48:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:51.388 13:48:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:51.388 13:48:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:51.647 [2024-11-20 13:48:54.412379] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:28:52.163 98.50 IOPS, 295.50 MiB/s [2024-11-20T13:48:55.080Z] [2024-11-20 13:48:54.868890] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:28:52.422 [2024-11-20 13:48:55.097883] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:28:52.422 [2024-11-20 13:48:55.099230] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:28:52.422 13:48:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:52.422 13:48:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:52.422 13:48:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:52.422 13:48:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:52.422 13:48:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:52.422 13:48:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:52.422 13:48:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:52.422 13:48:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.422 13:48:55 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:28:52.422 13:48:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.422 13:48:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.422 13:48:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:52.422 "name": "raid_bdev1", 00:28:52.422 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:28:52.422 "strip_size_kb": 0, 00:28:52.422 "state": "online", 00:28:52.422 "raid_level": "raid1", 00:28:52.422 "superblock": true, 00:28:52.422 "num_base_bdevs": 4, 00:28:52.422 "num_base_bdevs_discovered": 3, 00:28:52.422 "num_base_bdevs_operational": 3, 00:28:52.422 "process": { 00:28:52.422 "type": "rebuild", 00:28:52.422 "target": "spare", 00:28:52.422 "progress": { 00:28:52.422 "blocks": 51200, 00:28:52.422 "percent": 80 00:28:52.422 } 00:28:52.422 }, 00:28:52.422 "base_bdevs_list": [ 00:28:52.422 { 00:28:52.422 "name": "spare", 00:28:52.422 "uuid": "8ee40988-16c5-5fc8-a7f0-6d51ca80159a", 00:28:52.422 "is_configured": true, 00:28:52.422 "data_offset": 2048, 00:28:52.422 "data_size": 63488 00:28:52.422 }, 00:28:52.422 { 00:28:52.422 "name": null, 00:28:52.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:52.422 "is_configured": false, 00:28:52.422 "data_offset": 0, 00:28:52.422 "data_size": 63488 00:28:52.422 }, 00:28:52.422 { 00:28:52.422 "name": "BaseBdev3", 00:28:52.422 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:28:52.422 "is_configured": true, 00:28:52.422 "data_offset": 2048, 00:28:52.422 "data_size": 63488 00:28:52.422 }, 00:28:52.422 { 00:28:52.422 "name": "BaseBdev4", 00:28:52.422 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:28:52.422 "is_configured": true, 00:28:52.422 "data_offset": 2048, 00:28:52.422 "data_size": 63488 00:28:52.422 } 00:28:52.422 ] 00:28:52.422 }' 00:28:52.422 13:48:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:28:52.681 13:48:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:52.681 13:48:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:52.681 13:48:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:52.681 13:48:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:28:53.197 88.29 IOPS, 264.86 MiB/s [2024-11-20T13:48:56.114Z] [2024-11-20 13:48:55.889829] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:53.197 [2024-11-20 13:48:55.998347] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:53.197 [2024-11-20 13:48:56.003561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.766 13:48:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:53.766 "name": "raid_bdev1", 00:28:53.766 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:28:53.766 "strip_size_kb": 0, 00:28:53.766 "state": "online", 00:28:53.766 "raid_level": "raid1", 00:28:53.766 "superblock": true, 00:28:53.766 "num_base_bdevs": 4, 00:28:53.766 "num_base_bdevs_discovered": 3, 00:28:53.766 "num_base_bdevs_operational": 3, 00:28:53.766 "base_bdevs_list": [ 00:28:53.766 { 00:28:53.766 "name": "spare", 00:28:53.766 "uuid": "8ee40988-16c5-5fc8-a7f0-6d51ca80159a", 00:28:53.766 "is_configured": true, 00:28:53.766 "data_offset": 2048, 00:28:53.766 "data_size": 63488 00:28:53.766 }, 00:28:53.766 { 00:28:53.766 "name": null, 00:28:53.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:53.766 "is_configured": false, 00:28:53.766 "data_offset": 0, 00:28:53.766 "data_size": 63488 00:28:53.766 }, 00:28:53.766 { 00:28:53.766 "name": "BaseBdev3", 00:28:53.766 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:28:53.766 "is_configured": true, 00:28:53.766 "data_offset": 2048, 00:28:53.766 "data_size": 63488 00:28:53.766 }, 00:28:53.766 { 00:28:53.766 "name": "BaseBdev4", 00:28:53.766 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:28:53.766 "is_configured": true, 00:28:53.766 "data_offset": 2048, 00:28:53.766 "data_size": 63488 00:28:53.766 } 00:28:53.766 ] 00:28:53.766 }' 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:53.766 13:48:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:53.766 "name": "raid_bdev1", 00:28:53.766 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:28:53.766 "strip_size_kb": 0, 00:28:53.766 "state": "online", 00:28:53.766 "raid_level": "raid1", 00:28:53.766 "superblock": true, 00:28:53.766 "num_base_bdevs": 4, 00:28:53.766 "num_base_bdevs_discovered": 3, 00:28:53.766 "num_base_bdevs_operational": 3, 00:28:53.766 "base_bdevs_list": [ 00:28:53.766 { 00:28:53.766 "name": "spare", 00:28:53.766 "uuid": "8ee40988-16c5-5fc8-a7f0-6d51ca80159a", 00:28:53.766 "is_configured": true, 00:28:53.766 "data_offset": 2048, 00:28:53.766 
"data_size": 63488 00:28:53.766 }, 00:28:53.766 { 00:28:53.766 "name": null, 00:28:53.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:53.766 "is_configured": false, 00:28:53.766 "data_offset": 0, 00:28:53.766 "data_size": 63488 00:28:53.766 }, 00:28:53.766 { 00:28:53.766 "name": "BaseBdev3", 00:28:53.766 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:28:53.766 "is_configured": true, 00:28:53.766 "data_offset": 2048, 00:28:53.766 "data_size": 63488 00:28:53.766 }, 00:28:53.766 { 00:28:53.766 "name": "BaseBdev4", 00:28:53.766 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:28:53.766 "is_configured": true, 00:28:53.766 "data_offset": 2048, 00:28:53.766 "data_size": 63488 00:28:53.766 } 00:28:53.766 ] 00:28:53.766 }' 00:28:53.766 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:54.025 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:54.025 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:54.025 80.88 IOPS, 242.62 MiB/s [2024-11-20T13:48:56.942Z] 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:54.025 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:54.025 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:54.025 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:54.025 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:54.026 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:54.026 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:54.026 13:48:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:54.026 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:54.026 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:54.026 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:54.026 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:54.026 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:54.026 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.026 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:54.026 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.026 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:54.026 "name": "raid_bdev1", 00:28:54.026 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:28:54.026 "strip_size_kb": 0, 00:28:54.026 "state": "online", 00:28:54.026 "raid_level": "raid1", 00:28:54.026 "superblock": true, 00:28:54.026 "num_base_bdevs": 4, 00:28:54.026 "num_base_bdevs_discovered": 3, 00:28:54.026 "num_base_bdevs_operational": 3, 00:28:54.026 "base_bdevs_list": [ 00:28:54.026 { 00:28:54.026 "name": "spare", 00:28:54.026 "uuid": "8ee40988-16c5-5fc8-a7f0-6d51ca80159a", 00:28:54.026 "is_configured": true, 00:28:54.026 "data_offset": 2048, 00:28:54.026 "data_size": 63488 00:28:54.026 }, 00:28:54.026 { 00:28:54.026 "name": null, 00:28:54.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:54.026 "is_configured": false, 00:28:54.026 "data_offset": 0, 00:28:54.026 "data_size": 63488 00:28:54.026 }, 00:28:54.026 { 00:28:54.026 "name": "BaseBdev3", 00:28:54.026 "uuid": 
"6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:28:54.026 "is_configured": true, 00:28:54.026 "data_offset": 2048, 00:28:54.026 "data_size": 63488 00:28:54.026 }, 00:28:54.026 { 00:28:54.026 "name": "BaseBdev4", 00:28:54.026 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:28:54.026 "is_configured": true, 00:28:54.026 "data_offset": 2048, 00:28:54.026 "data_size": 63488 00:28:54.026 } 00:28:54.026 ] 00:28:54.026 }' 00:28:54.026 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:54.026 13:48:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:54.594 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:54.594 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.594 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:54.594 [2024-11-20 13:48:57.282088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:54.594 [2024-11-20 13:48:57.282126] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:54.594 00:28:54.594 Latency(us) 00:28:54.594 [2024-11-20T13:48:57.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.594 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:28:54.594 raid_bdev1 : 8.66 77.24 231.71 0.00 0.00 17027.07 283.00 115819.99 00:28:54.594 [2024-11-20T13:48:57.511Z] =================================================================================================================== 00:28:54.594 [2024-11-20T13:48:57.511Z] Total : 77.24 231.71 0.00 0.00 17027.07 283.00 115819.99 00:28:54.594 [2024-11-20 13:48:57.370965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:54.594 [2024-11-20 13:48:57.371059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:28:54.594 [2024-11-20 13:48:57.371253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:54.594 [2024-11-20 13:48:57.371269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:54.594 { 00:28:54.594 "results": [ 00:28:54.594 { 00:28:54.594 "job": "raid_bdev1", 00:28:54.594 "core_mask": "0x1", 00:28:54.594 "workload": "randrw", 00:28:54.594 "percentage": 50, 00:28:54.594 "status": "finished", 00:28:54.594 "queue_depth": 2, 00:28:54.594 "io_size": 3145728, 00:28:54.595 "runtime": 8.66186, 00:28:54.595 "iops": 77.23514349112085, 00:28:54.595 "mibps": 231.70543047336255, 00:28:54.595 "io_failed": 0, 00:28:54.595 "io_timeout": 0, 00:28:54.595 "avg_latency_us": 17027.073113194725, 00:28:54.595 "min_latency_us": 282.99636363636364, 00:28:54.595 "max_latency_us": 115819.98545454546 00:28:54.595 } 00:28:54.595 ], 00:28:54.595 "core_count": 1 00:28:54.595 } 00:28:54.595 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.595 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:54.595 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.595 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:54.595 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:28:54.595 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.595 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:28:54.595 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:28:54.595 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:28:54.595 13:48:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:28:54.595 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:54.595 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:28:54.595 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:54.595 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:54.595 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:54.595 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:28:54.595 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:54.595 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:54.595 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:28:54.853 /dev/nbd0 00:28:55.111 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:55.111 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:55.111 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@877 -- # break 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:55.112 1+0 records in 00:28:55.112 1+0 records out 00:28:55.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396609 s, 10.3 MB/s 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:55.112 13:48:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:28:55.370 /dev/nbd1 00:28:55.370 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:55.370 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:55.370 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:28:55.370 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:28:55.370 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:55.370 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:55.370 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:28:55.370 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 
00:28:55.370 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:55.370 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:55.370 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:55.370 1+0 records in 00:28:55.370 1+0 records out 00:28:55.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396025 s, 10.3 MB/s 00:28:55.370 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:55.370 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:28:55.370 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:55.371 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:55.371 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:28:55.371 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:55.371 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:55.371 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:55.630 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:28:55.630 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:55.630 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:28:55.630 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:55.630 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@51 -- # local i 00:28:55.630 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:55.630 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:28:55.938 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:55.938 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:55.938 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:55.938 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:55.938 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:55.938 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:55.938 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:28:55.938 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:55.938 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:28:55.938 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:28:55.938 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:28:55.938 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:28:55.938 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:28:55.938 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:55.938 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:28:55.938 13:48:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:55.938 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:28:55.938 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:55.938 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:55.938 13:48:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:28:56.197 /dev/nbd1 00:28:56.197 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:56.197 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:56.197 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:28:56.197 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:28:56.197 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:56.197 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:56.197 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:28:56.197 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:28:56.197 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:56.197 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:56.197 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:56.197 1+0 records in 00:28:56.197 1+0 records out 00:28:56.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306553 s, 13.4 MB/s 00:28:56.197 13:48:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:56.197 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:28:56.197 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:56.454 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:56.454 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:28:56.454 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:56.454 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:56.454 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:56.454 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:28:56.454 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:56.454 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:28:56.454 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:56.454 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:28:56.454 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:56.454 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:28:56.712 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:56.712 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:56.712 13:48:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:56.712 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:56.712 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:56.712 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:56.712 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:28:56.712 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:56.712 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:28:56.712 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:28:56.712 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:56.712 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:56.713 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:28:56.713 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:56.713 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:28:56.984 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:56.984 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:56.984 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:56.984 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:56.984 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:56.984 13:48:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:56.984 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:28:56.984 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:56.984 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:28:56.984 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:28:56.984 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.984 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:56.984 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.984 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:28:56.984 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.984 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:56.984 [2024-11-20 13:48:59.852371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:56.984 [2024-11-20 13:48:59.852440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:56.984 [2024-11-20 13:48:59.852472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:28:56.984 [2024-11-20 13:48:59.852487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:56.984 [2024-11-20 13:48:59.855750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:56.984 [2024-11-20 13:48:59.855795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:56.984 [2024-11-20 13:48:59.855935] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:28:56.984 [2024-11-20 13:48:59.856018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:56.984 [2024-11-20 13:48:59.856228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:56.984 [2024-11-20 13:48:59.856385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:56.984 spare 00:28:56.984 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.984 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:28:56.984 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.984 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:57.244 [2024-11-20 13:48:59.956594] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:28:57.244 [2024-11-20 13:48:59.956659] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:57.244 [2024-11-20 13:48:59.957085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:28:57.244 [2024-11-20 13:48:59.957333] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:28:57.244 [2024-11-20 13:48:59.957363] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:28:57.244 [2024-11-20 13:48:59.957577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:57.244 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.244 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:57.244 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:28:57.244 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:57.244 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:57.244 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:57.244 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:57.244 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:57.244 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:57.244 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:57.244 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:57.244 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:57.244 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.244 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:57.244 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:57.244 13:48:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.244 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:57.244 "name": "raid_bdev1", 00:28:57.244 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:28:57.244 "strip_size_kb": 0, 00:28:57.244 "state": "online", 00:28:57.244 "raid_level": "raid1", 00:28:57.244 "superblock": true, 00:28:57.244 "num_base_bdevs": 4, 00:28:57.244 "num_base_bdevs_discovered": 3, 00:28:57.244 "num_base_bdevs_operational": 3, 00:28:57.244 "base_bdevs_list": [ 00:28:57.244 { 00:28:57.245 "name": "spare", 00:28:57.245 "uuid": 
"8ee40988-16c5-5fc8-a7f0-6d51ca80159a", 00:28:57.245 "is_configured": true, 00:28:57.245 "data_offset": 2048, 00:28:57.245 "data_size": 63488 00:28:57.245 }, 00:28:57.245 { 00:28:57.245 "name": null, 00:28:57.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:57.245 "is_configured": false, 00:28:57.245 "data_offset": 2048, 00:28:57.245 "data_size": 63488 00:28:57.245 }, 00:28:57.245 { 00:28:57.245 "name": "BaseBdev3", 00:28:57.245 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:28:57.245 "is_configured": true, 00:28:57.245 "data_offset": 2048, 00:28:57.245 "data_size": 63488 00:28:57.245 }, 00:28:57.245 { 00:28:57.245 "name": "BaseBdev4", 00:28:57.245 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:28:57.245 "is_configured": true, 00:28:57.245 "data_offset": 2048, 00:28:57.245 "data_size": 63488 00:28:57.245 } 00:28:57.245 ] 00:28:57.245 }' 00:28:57.245 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:57.245 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:57.813 13:49:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:57.813 "name": "raid_bdev1", 00:28:57.813 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:28:57.813 "strip_size_kb": 0, 00:28:57.813 "state": "online", 00:28:57.813 "raid_level": "raid1", 00:28:57.813 "superblock": true, 00:28:57.813 "num_base_bdevs": 4, 00:28:57.813 "num_base_bdevs_discovered": 3, 00:28:57.813 "num_base_bdevs_operational": 3, 00:28:57.813 "base_bdevs_list": [ 00:28:57.813 { 00:28:57.813 "name": "spare", 00:28:57.813 "uuid": "8ee40988-16c5-5fc8-a7f0-6d51ca80159a", 00:28:57.813 "is_configured": true, 00:28:57.813 "data_offset": 2048, 00:28:57.813 "data_size": 63488 00:28:57.813 }, 00:28:57.813 { 00:28:57.813 "name": null, 00:28:57.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:57.813 "is_configured": false, 00:28:57.813 "data_offset": 2048, 00:28:57.813 "data_size": 63488 00:28:57.813 }, 00:28:57.813 { 00:28:57.813 "name": "BaseBdev3", 00:28:57.813 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:28:57.813 "is_configured": true, 00:28:57.813 "data_offset": 2048, 00:28:57.813 "data_size": 63488 00:28:57.813 }, 00:28:57.813 { 00:28:57.813 "name": "BaseBdev4", 00:28:57.813 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:28:57.813 "is_configured": true, 00:28:57.813 "data_offset": 2048, 00:28:57.813 "data_size": 63488 00:28:57.813 } 00:28:57.813 ] 00:28:57.813 }' 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:57.813 
13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.813 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:58.072 [2024-11-20 13:49:00.724999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:58.072 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.072 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:58.072 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:58.072 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:58.072 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:58.072 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:58.072 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:28:58.072 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:58.072 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:58.072 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:58.072 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:58.072 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:58.072 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:58.072 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.072 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:58.072 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.072 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:58.072 "name": "raid_bdev1", 00:28:58.072 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:28:58.072 "strip_size_kb": 0, 00:28:58.072 "state": "online", 00:28:58.072 "raid_level": "raid1", 00:28:58.072 "superblock": true, 00:28:58.072 "num_base_bdevs": 4, 00:28:58.072 "num_base_bdevs_discovered": 2, 00:28:58.072 "num_base_bdevs_operational": 2, 00:28:58.072 "base_bdevs_list": [ 00:28:58.072 { 00:28:58.072 "name": null, 00:28:58.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:58.072 "is_configured": false, 00:28:58.072 "data_offset": 0, 00:28:58.072 "data_size": 63488 00:28:58.072 }, 00:28:58.072 { 00:28:58.072 "name": null, 00:28:58.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:58.072 "is_configured": false, 00:28:58.072 "data_offset": 2048, 00:28:58.072 "data_size": 63488 00:28:58.072 }, 00:28:58.072 { 00:28:58.072 "name": 
"BaseBdev3", 00:28:58.072 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:28:58.072 "is_configured": true, 00:28:58.072 "data_offset": 2048, 00:28:58.072 "data_size": 63488 00:28:58.072 }, 00:28:58.072 { 00:28:58.072 "name": "BaseBdev4", 00:28:58.072 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:28:58.072 "is_configured": true, 00:28:58.072 "data_offset": 2048, 00:28:58.072 "data_size": 63488 00:28:58.072 } 00:28:58.072 ] 00:28:58.072 }' 00:28:58.072 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:58.072 13:49:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:58.641 13:49:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:28:58.641 13:49:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.641 13:49:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:58.641 [2024-11-20 13:49:01.297195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:58.641 [2024-11-20 13:49:01.297467] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:28:58.641 [2024-11-20 13:49:01.297505] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:58.641 [2024-11-20 13:49:01.297591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:58.641 [2024-11-20 13:49:01.312528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:28:58.641 13:49:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.641 13:49:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:28:58.641 [2024-11-20 13:49:01.315601] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:59.578 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:59.578 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:28:59.578 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:28:59.578 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:28:59.578 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:28:59.578 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:59.578 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.578 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.578 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:59.578 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.578 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:28:59.578 "name": "raid_bdev1", 00:28:59.578 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:28:59.578 "strip_size_kb": 0, 00:28:59.578 "state": "online", 
00:28:59.578 "raid_level": "raid1", 00:28:59.578 "superblock": true, 00:28:59.578 "num_base_bdevs": 4, 00:28:59.578 "num_base_bdevs_discovered": 3, 00:28:59.578 "num_base_bdevs_operational": 3, 00:28:59.578 "process": { 00:28:59.578 "type": "rebuild", 00:28:59.578 "target": "spare", 00:28:59.578 "progress": { 00:28:59.578 "blocks": 20480, 00:28:59.578 "percent": 32 00:28:59.578 } 00:28:59.578 }, 00:28:59.578 "base_bdevs_list": [ 00:28:59.578 { 00:28:59.578 "name": "spare", 00:28:59.578 "uuid": "8ee40988-16c5-5fc8-a7f0-6d51ca80159a", 00:28:59.578 "is_configured": true, 00:28:59.578 "data_offset": 2048, 00:28:59.578 "data_size": 63488 00:28:59.578 }, 00:28:59.578 { 00:28:59.578 "name": null, 00:28:59.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.578 "is_configured": false, 00:28:59.578 "data_offset": 2048, 00:28:59.578 "data_size": 63488 00:28:59.578 }, 00:28:59.578 { 00:28:59.578 "name": "BaseBdev3", 00:28:59.578 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:28:59.578 "is_configured": true, 00:28:59.578 "data_offset": 2048, 00:28:59.578 "data_size": 63488 00:28:59.578 }, 00:28:59.578 { 00:28:59.578 "name": "BaseBdev4", 00:28:59.578 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:28:59.578 "is_configured": true, 00:28:59.578 "data_offset": 2048, 00:28:59.578 "data_size": 63488 00:28:59.578 } 00:28:59.578 ] 00:28:59.578 }' 00:28:59.578 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:28:59.578 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:59.578 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:28:59.578 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:28:59.578 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:28:59.578 13:49:02 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.578 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:59.837 [2024-11-20 13:49:02.493461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:59.837 [2024-11-20 13:49:02.525309] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:59.837 [2024-11-20 13:49:02.525378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:59.837 [2024-11-20 13:49:02.525403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:59.837 [2024-11-20 13:49:02.525414] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:59.837 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.837 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:59.837 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:59.837 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:59.837 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:59.837 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:59.837 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:59.837 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:59.837 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:59.837 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:59.837 13:49:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:59.837 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:59.837 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.837 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:59.837 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.837 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.837 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:59.837 "name": "raid_bdev1", 00:28:59.837 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:28:59.837 "strip_size_kb": 0, 00:28:59.837 "state": "online", 00:28:59.837 "raid_level": "raid1", 00:28:59.837 "superblock": true, 00:28:59.837 "num_base_bdevs": 4, 00:28:59.837 "num_base_bdevs_discovered": 2, 00:28:59.837 "num_base_bdevs_operational": 2, 00:28:59.837 "base_bdevs_list": [ 00:28:59.837 { 00:28:59.837 "name": null, 00:28:59.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.837 "is_configured": false, 00:28:59.837 "data_offset": 0, 00:28:59.837 "data_size": 63488 00:28:59.837 }, 00:28:59.837 { 00:28:59.837 "name": null, 00:28:59.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.837 "is_configured": false, 00:28:59.837 "data_offset": 2048, 00:28:59.837 "data_size": 63488 00:28:59.837 }, 00:28:59.837 { 00:28:59.837 "name": "BaseBdev3", 00:28:59.837 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:28:59.837 "is_configured": true, 00:28:59.837 "data_offset": 2048, 00:28:59.837 "data_size": 63488 00:28:59.837 }, 00:28:59.837 { 00:28:59.837 "name": "BaseBdev4", 00:28:59.837 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:28:59.837 "is_configured": true, 00:28:59.837 "data_offset": 2048, 00:28:59.837 
"data_size": 63488 00:28:59.837 } 00:28:59.837 ] 00:28:59.837 }' 00:28:59.837 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:59.837 13:49:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:00.404 13:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:29:00.404 13:49:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.404 13:49:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:00.404 [2024-11-20 13:49:03.085322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:00.404 [2024-11-20 13:49:03.085450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:00.404 [2024-11-20 13:49:03.085510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:29:00.404 [2024-11-20 13:49:03.085531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:00.404 [2024-11-20 13:49:03.086462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:00.404 [2024-11-20 13:49:03.086515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:00.404 [2024-11-20 13:49:03.086707] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:00.404 [2024-11-20 13:49:03.086737] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:00.404 [2024-11-20 13:49:03.086761] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:00.404 [2024-11-20 13:49:03.086817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:00.404 [2024-11-20 13:49:03.105326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:29:00.404 spare 00:29:00.404 13:49:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.404 13:49:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:29:00.404 [2024-11-20 13:49:03.108718] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:01.340 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:01.340 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:01.340 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:01.340 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:01.340 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:01.340 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:01.340 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.340 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:01.340 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.340 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.340 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:01.340 "name": "raid_bdev1", 00:29:01.340 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:29:01.340 "strip_size_kb": 0, 00:29:01.340 
"state": "online", 00:29:01.340 "raid_level": "raid1", 00:29:01.340 "superblock": true, 00:29:01.340 "num_base_bdevs": 4, 00:29:01.340 "num_base_bdevs_discovered": 3, 00:29:01.340 "num_base_bdevs_operational": 3, 00:29:01.340 "process": { 00:29:01.340 "type": "rebuild", 00:29:01.340 "target": "spare", 00:29:01.340 "progress": { 00:29:01.340 "blocks": 20480, 00:29:01.340 "percent": 32 00:29:01.340 } 00:29:01.340 }, 00:29:01.340 "base_bdevs_list": [ 00:29:01.340 { 00:29:01.340 "name": "spare", 00:29:01.340 "uuid": "8ee40988-16c5-5fc8-a7f0-6d51ca80159a", 00:29:01.340 "is_configured": true, 00:29:01.340 "data_offset": 2048, 00:29:01.340 "data_size": 63488 00:29:01.340 }, 00:29:01.340 { 00:29:01.340 "name": null, 00:29:01.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:01.340 "is_configured": false, 00:29:01.340 "data_offset": 2048, 00:29:01.340 "data_size": 63488 00:29:01.340 }, 00:29:01.340 { 00:29:01.340 "name": "BaseBdev3", 00:29:01.340 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:29:01.340 "is_configured": true, 00:29:01.340 "data_offset": 2048, 00:29:01.340 "data_size": 63488 00:29:01.340 }, 00:29:01.340 { 00:29:01.340 "name": "BaseBdev4", 00:29:01.340 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:29:01.340 "is_configured": true, 00:29:01.340 "data_offset": 2048, 00:29:01.340 "data_size": 63488 00:29:01.340 } 00:29:01.340 ] 00:29:01.340 }' 00:29:01.340 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:01.340 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:01.340 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:01.632 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:01.632 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:29:01.632 13:49:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.632 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:01.632 [2024-11-20 13:49:04.274787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:01.632 [2024-11-20 13:49:04.322094] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:01.632 [2024-11-20 13:49:04.322386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:01.632 [2024-11-20 13:49:04.322549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:01.632 [2024-11-20 13:49:04.322621] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:01.632 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.632 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:01.632 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:01.632 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:01.632 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:01.632 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:01.632 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:01.632 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:01.632 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:01.632 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:01.632 13:49:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:01.632 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:01.632 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.632 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.632 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:01.632 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.632 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:01.632 "name": "raid_bdev1", 00:29:01.632 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:29:01.632 "strip_size_kb": 0, 00:29:01.632 "state": "online", 00:29:01.632 "raid_level": "raid1", 00:29:01.632 "superblock": true, 00:29:01.632 "num_base_bdevs": 4, 00:29:01.632 "num_base_bdevs_discovered": 2, 00:29:01.632 "num_base_bdevs_operational": 2, 00:29:01.632 "base_bdevs_list": [ 00:29:01.632 { 00:29:01.632 "name": null, 00:29:01.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:01.632 "is_configured": false, 00:29:01.632 "data_offset": 0, 00:29:01.632 "data_size": 63488 00:29:01.632 }, 00:29:01.632 { 00:29:01.632 "name": null, 00:29:01.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:01.632 "is_configured": false, 00:29:01.633 "data_offset": 2048, 00:29:01.633 "data_size": 63488 00:29:01.633 }, 00:29:01.633 { 00:29:01.633 "name": "BaseBdev3", 00:29:01.633 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:29:01.633 "is_configured": true, 00:29:01.633 "data_offset": 2048, 00:29:01.633 "data_size": 63488 00:29:01.633 }, 00:29:01.633 { 00:29:01.633 "name": "BaseBdev4", 00:29:01.633 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:29:01.633 "is_configured": true, 00:29:01.633 "data_offset": 2048, 00:29:01.633 
"data_size": 63488 00:29:01.633 } 00:29:01.633 ] 00:29:01.633 }' 00:29:01.633 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:01.633 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:02.201 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:02.201 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:02.201 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:02.201 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:02.201 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:02.201 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:02.201 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.201 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:02.201 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:02.201 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.201 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:02.201 "name": "raid_bdev1", 00:29:02.201 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:29:02.201 "strip_size_kb": 0, 00:29:02.201 "state": "online", 00:29:02.201 "raid_level": "raid1", 00:29:02.201 "superblock": true, 00:29:02.201 "num_base_bdevs": 4, 00:29:02.201 "num_base_bdevs_discovered": 2, 00:29:02.201 "num_base_bdevs_operational": 2, 00:29:02.201 "base_bdevs_list": [ 00:29:02.201 { 00:29:02.201 "name": null, 00:29:02.201 "uuid": "00000000-0000-0000-0000-000000000000", 
00:29:02.201 "is_configured": false, 00:29:02.201 "data_offset": 0, 00:29:02.201 "data_size": 63488 00:29:02.201 }, 00:29:02.201 { 00:29:02.201 "name": null, 00:29:02.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.201 "is_configured": false, 00:29:02.201 "data_offset": 2048, 00:29:02.201 "data_size": 63488 00:29:02.201 }, 00:29:02.201 { 00:29:02.201 "name": "BaseBdev3", 00:29:02.201 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:29:02.201 "is_configured": true, 00:29:02.201 "data_offset": 2048, 00:29:02.201 "data_size": 63488 00:29:02.201 }, 00:29:02.201 { 00:29:02.201 "name": "BaseBdev4", 00:29:02.201 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:29:02.201 "is_configured": true, 00:29:02.201 "data_offset": 2048, 00:29:02.201 "data_size": 63488 00:29:02.201 } 00:29:02.201 ] 00:29:02.201 }' 00:29:02.201 13:49:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:02.201 13:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:02.201 13:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:02.201 13:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:02.201 13:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:29:02.201 13:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.201 13:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:02.201 13:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.201 13:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:02.202 13:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.202 13:49:05 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:02.202 [2024-11-20 13:49:05.076557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:02.202 [2024-11-20 13:49:05.076998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:02.202 [2024-11-20 13:49:05.077036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:29:02.202 [2024-11-20 13:49:05.077055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:02.202 [2024-11-20 13:49:05.077695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:02.202 [2024-11-20 13:49:05.077730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:02.202 [2024-11-20 13:49:05.077837] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:02.202 [2024-11-20 13:49:05.077883] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:29:02.202 [2024-11-20 13:49:05.077960] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:02.202 [2024-11-20 13:49:05.077979] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:29:02.202 BaseBdev1 00:29:02.202 13:49:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.202 13:49:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:29:03.578 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:03.578 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:03.578 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:29:03.578 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:03.578 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:03.578 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:03.578 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:03.578 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:03.578 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:03.578 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:03.578 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:03.578 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.578 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.578 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:03.578 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.578 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:03.578 "name": "raid_bdev1", 00:29:03.578 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:29:03.578 "strip_size_kb": 0, 00:29:03.578 "state": "online", 00:29:03.578 "raid_level": "raid1", 00:29:03.578 "superblock": true, 00:29:03.578 "num_base_bdevs": 4, 00:29:03.578 "num_base_bdevs_discovered": 2, 00:29:03.578 "num_base_bdevs_operational": 2, 00:29:03.578 "base_bdevs_list": [ 00:29:03.578 { 00:29:03.578 "name": null, 00:29:03.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.578 "is_configured": false, 00:29:03.578 
"data_offset": 0, 00:29:03.578 "data_size": 63488 00:29:03.578 }, 00:29:03.578 { 00:29:03.578 "name": null, 00:29:03.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.578 "is_configured": false, 00:29:03.578 "data_offset": 2048, 00:29:03.578 "data_size": 63488 00:29:03.578 }, 00:29:03.578 { 00:29:03.578 "name": "BaseBdev3", 00:29:03.578 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:29:03.578 "is_configured": true, 00:29:03.578 "data_offset": 2048, 00:29:03.578 "data_size": 63488 00:29:03.578 }, 00:29:03.578 { 00:29:03.578 "name": "BaseBdev4", 00:29:03.578 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:29:03.578 "is_configured": true, 00:29:03.578 "data_offset": 2048, 00:29:03.578 "data_size": 63488 00:29:03.578 } 00:29:03.578 ] 00:29:03.578 }' 00:29:03.578 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:03.578 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:03.837 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:03.837 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:03.837 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:03.837 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:03.837 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:03.837 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:03.837 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.837 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.837 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:29:03.837 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.837 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:03.837 "name": "raid_bdev1", 00:29:03.837 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:29:03.837 "strip_size_kb": 0, 00:29:03.837 "state": "online", 00:29:03.837 "raid_level": "raid1", 00:29:03.837 "superblock": true, 00:29:03.837 "num_base_bdevs": 4, 00:29:03.837 "num_base_bdevs_discovered": 2, 00:29:03.837 "num_base_bdevs_operational": 2, 00:29:03.837 "base_bdevs_list": [ 00:29:03.837 { 00:29:03.837 "name": null, 00:29:03.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.837 "is_configured": false, 00:29:03.837 "data_offset": 0, 00:29:03.837 "data_size": 63488 00:29:03.837 }, 00:29:03.837 { 00:29:03.837 "name": null, 00:29:03.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.837 "is_configured": false, 00:29:03.837 "data_offset": 2048, 00:29:03.837 "data_size": 63488 00:29:03.837 }, 00:29:03.837 { 00:29:03.837 "name": "BaseBdev3", 00:29:03.837 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:29:03.837 "is_configured": true, 00:29:03.837 "data_offset": 2048, 00:29:03.837 "data_size": 63488 00:29:03.837 }, 00:29:03.837 { 00:29:03.837 "name": "BaseBdev4", 00:29:03.837 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:29:03.837 "is_configured": true, 00:29:03.837 "data_offset": 2048, 00:29:03.837 "data_size": 63488 00:29:03.837 } 00:29:03.837 ] 00:29:03.837 }' 00:29:03.837 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:03.837 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:03.837 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:04.095 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:04.095 
13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:04.095 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:29:04.095 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:04.095 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:04.095 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.095 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:04.095 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.095 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:04.095 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.095 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:04.095 [2024-11-20 13:49:06.773540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:04.095 [2024-11-20 13:49:06.773767] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:29:04.095 [2024-11-20 13:49:06.773789] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:04.095 request: 00:29:04.095 { 00:29:04.095 "base_bdev": "BaseBdev1", 00:29:04.095 "raid_bdev": "raid_bdev1", 00:29:04.095 "method": "bdev_raid_add_base_bdev", 00:29:04.095 "req_id": 1 00:29:04.095 } 00:29:04.095 Got JSON-RPC error response 00:29:04.095 response: 00:29:04.095 { 00:29:04.095 "code": -22, 00:29:04.095 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:04.095 } 00:29:04.095 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:04.095 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:29:04.095 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:04.095 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:04.095 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:04.095 13:49:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:29:05.029 13:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:05.029 13:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:05.029 13:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:05.029 13:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:05.029 13:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:05.029 13:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:05.029 13:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:05.029 13:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:05.029 13:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:05.029 13:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:05.029 13:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:05.029 13:49:07 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.029 13:49:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:05.029 13:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:05.029 13:49:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.029 13:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:05.029 "name": "raid_bdev1", 00:29:05.029 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:29:05.029 "strip_size_kb": 0, 00:29:05.029 "state": "online", 00:29:05.029 "raid_level": "raid1", 00:29:05.029 "superblock": true, 00:29:05.029 "num_base_bdevs": 4, 00:29:05.029 "num_base_bdevs_discovered": 2, 00:29:05.029 "num_base_bdevs_operational": 2, 00:29:05.029 "base_bdevs_list": [ 00:29:05.029 { 00:29:05.029 "name": null, 00:29:05.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.029 "is_configured": false, 00:29:05.029 "data_offset": 0, 00:29:05.029 "data_size": 63488 00:29:05.029 }, 00:29:05.029 { 00:29:05.029 "name": null, 00:29:05.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.029 "is_configured": false, 00:29:05.029 "data_offset": 2048, 00:29:05.029 "data_size": 63488 00:29:05.029 }, 00:29:05.029 { 00:29:05.029 "name": "BaseBdev3", 00:29:05.029 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:29:05.029 "is_configured": true, 00:29:05.029 "data_offset": 2048, 00:29:05.029 "data_size": 63488 00:29:05.029 }, 00:29:05.029 { 00:29:05.029 "name": "BaseBdev4", 00:29:05.029 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:29:05.029 "is_configured": true, 00:29:05.029 "data_offset": 2048, 00:29:05.029 "data_size": 63488 00:29:05.029 } 00:29:05.029 ] 00:29:05.029 }' 00:29:05.030 13:49:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:05.030 13:49:07 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:05.596 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:05.596 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:05.596 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:05.596 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:05.596 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:05.596 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:05.596 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.596 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:05.596 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:05.596 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.596 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:05.596 "name": "raid_bdev1", 00:29:05.596 "uuid": "c3f9e552-c3aa-4906-91a2-3076378affcc", 00:29:05.596 "strip_size_kb": 0, 00:29:05.596 "state": "online", 00:29:05.596 "raid_level": "raid1", 00:29:05.596 "superblock": true, 00:29:05.596 "num_base_bdevs": 4, 00:29:05.596 "num_base_bdevs_discovered": 2, 00:29:05.596 "num_base_bdevs_operational": 2, 00:29:05.596 "base_bdevs_list": [ 00:29:05.596 { 00:29:05.596 "name": null, 00:29:05.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.596 "is_configured": false, 00:29:05.596 "data_offset": 0, 00:29:05.596 "data_size": 63488 00:29:05.596 }, 00:29:05.596 { 00:29:05.596 "name": null, 00:29:05.596 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:29:05.596 "is_configured": false, 00:29:05.596 "data_offset": 2048, 00:29:05.596 "data_size": 63488 00:29:05.596 }, 00:29:05.596 { 00:29:05.596 "name": "BaseBdev3", 00:29:05.596 "uuid": "6a4a32d7-4f44-51d0-8787-9588de1a3e40", 00:29:05.596 "is_configured": true, 00:29:05.596 "data_offset": 2048, 00:29:05.596 "data_size": 63488 00:29:05.596 }, 00:29:05.596 { 00:29:05.596 "name": "BaseBdev4", 00:29:05.596 "uuid": "e50b6e10-eefe-5390-ba2d-23f6486f3af2", 00:29:05.596 "is_configured": true, 00:29:05.596 "data_offset": 2048, 00:29:05.596 "data_size": 63488 00:29:05.596 } 00:29:05.596 ] 00:29:05.596 }' 00:29:05.596 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:05.596 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:05.596 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:05.596 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:05.597 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79700 00:29:05.597 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79700 ']' 00:29:05.597 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79700 00:29:05.597 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:29:05.597 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:05.597 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79700 00:29:05.855 killing process with pid 79700 00:29:05.855 Received shutdown signal, test time was about 19.829378 seconds 00:29:05.855 00:29:05.855 Latency(us) 00:29:05.855 [2024-11-20T13:49:08.772Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:29:05.855 [2024-11-20T13:49:08.772Z] =================================================================================================================== 00:29:05.855 [2024-11-20T13:49:08.772Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:05.855 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:05.855 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:05.855 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79700' 00:29:05.855 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79700 00:29:05.855 [2024-11-20 13:49:08.517462] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:05.855 13:49:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79700 00:29:05.855 [2024-11-20 13:49:08.517650] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:05.855 [2024-11-20 13:49:08.517745] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:05.855 [2024-11-20 13:49:08.517768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:29:06.114 [2024-11-20 13:49:08.909381] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:07.493 ************************************ 00:29:07.493 END TEST raid_rebuild_test_sb_io 00:29:07.493 ************************************ 00:29:07.493 13:49:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:29:07.493 00:29:07.493 real 0m23.586s 00:29:07.493 user 0m32.192s 00:29:07.493 sys 0m2.650s 00:29:07.493 13:49:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.493 13:49:10 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:29:07.493 13:49:10 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:29:07.493 13:49:10 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:29:07.493 13:49:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:07.493 13:49:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.493 13:49:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:07.493 ************************************ 00:29:07.493 START TEST raid5f_state_function_test 00:29:07.493 ************************************ 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:07.493 13:49:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80447 00:29:07.493 Process raid pid: 80447 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:07.493 
13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80447' 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80447 00:29:07.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80447 ']' 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.493 13:49:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.493 [2024-11-20 13:49:10.271306] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:29:07.493 [2024-11-20 13:49:10.271496] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.753 [2024-11-20 13:49:10.465615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.753 [2024-11-20 13:49:10.635770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.012 [2024-11-20 13:49:10.834992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:08.012 [2024-11-20 13:49:10.835049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.580 [2024-11-20 13:49:11.253856] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:08.580 [2024-11-20 13:49:11.253971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:08.580 [2024-11-20 13:49:11.253989] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:08.580 [2024-11-20 13:49:11.254006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:08.580 [2024-11-20 13:49:11.254016] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:29:08.580 [2024-11-20 13:49:11.254030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:08.580 "name": "Existed_Raid", 00:29:08.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:08.580 "strip_size_kb": 64, 00:29:08.580 "state": "configuring", 00:29:08.580 "raid_level": "raid5f", 00:29:08.580 "superblock": false, 00:29:08.580 "num_base_bdevs": 3, 00:29:08.580 "num_base_bdevs_discovered": 0, 00:29:08.580 "num_base_bdevs_operational": 3, 00:29:08.580 "base_bdevs_list": [ 00:29:08.580 { 00:29:08.580 "name": "BaseBdev1", 00:29:08.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:08.580 "is_configured": false, 00:29:08.580 "data_offset": 0, 00:29:08.580 "data_size": 0 00:29:08.580 }, 00:29:08.580 { 00:29:08.580 "name": "BaseBdev2", 00:29:08.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:08.580 "is_configured": false, 00:29:08.580 "data_offset": 0, 00:29:08.580 "data_size": 0 00:29:08.580 }, 00:29:08.580 { 00:29:08.580 "name": "BaseBdev3", 00:29:08.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:08.580 "is_configured": false, 00:29:08.580 "data_offset": 0, 00:29:08.580 "data_size": 0 00:29:08.580 } 00:29:08.580 ] 00:29:08.580 }' 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:08.580 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.839 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:08.839 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.839 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.839 [2024-11-20 13:49:11.749989] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:08.839 [2024-11-20 13:49:11.750048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.099 [2024-11-20 13:49:11.757976] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:09.099 [2024-11-20 13:49:11.758030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:09.099 [2024-11-20 13:49:11.758045] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:09.099 [2024-11-20 13:49:11.758060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:09.099 [2024-11-20 13:49:11.758069] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:09.099 [2024-11-20 13:49:11.758082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.099 [2024-11-20 13:49:11.807746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:09.099 BaseBdev1 00:29:09.099 13:49:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.099 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.099 [ 00:29:09.099 { 00:29:09.099 "name": "BaseBdev1", 00:29:09.099 "aliases": [ 00:29:09.099 "ce00271e-ee84-4206-96b3-c7baac56e1a1" 00:29:09.099 ], 00:29:09.100 "product_name": "Malloc disk", 00:29:09.100 "block_size": 512, 00:29:09.100 "num_blocks": 65536, 00:29:09.100 "uuid": "ce00271e-ee84-4206-96b3-c7baac56e1a1", 00:29:09.100 "assigned_rate_limits": { 00:29:09.100 "rw_ios_per_sec": 0, 00:29:09.100 
"rw_mbytes_per_sec": 0, 00:29:09.100 "r_mbytes_per_sec": 0, 00:29:09.100 "w_mbytes_per_sec": 0 00:29:09.100 }, 00:29:09.100 "claimed": true, 00:29:09.100 "claim_type": "exclusive_write", 00:29:09.100 "zoned": false, 00:29:09.100 "supported_io_types": { 00:29:09.100 "read": true, 00:29:09.100 "write": true, 00:29:09.100 "unmap": true, 00:29:09.100 "flush": true, 00:29:09.100 "reset": true, 00:29:09.100 "nvme_admin": false, 00:29:09.100 "nvme_io": false, 00:29:09.100 "nvme_io_md": false, 00:29:09.100 "write_zeroes": true, 00:29:09.100 "zcopy": true, 00:29:09.100 "get_zone_info": false, 00:29:09.100 "zone_management": false, 00:29:09.100 "zone_append": false, 00:29:09.100 "compare": false, 00:29:09.100 "compare_and_write": false, 00:29:09.100 "abort": true, 00:29:09.100 "seek_hole": false, 00:29:09.100 "seek_data": false, 00:29:09.100 "copy": true, 00:29:09.100 "nvme_iov_md": false 00:29:09.100 }, 00:29:09.100 "memory_domains": [ 00:29:09.100 { 00:29:09.100 "dma_device_id": "system", 00:29:09.100 "dma_device_type": 1 00:29:09.100 }, 00:29:09.100 { 00:29:09.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:09.100 "dma_device_type": 2 00:29:09.100 } 00:29:09.100 ], 00:29:09.100 "driver_specific": {} 00:29:09.100 } 00:29:09.100 ] 00:29:09.100 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.100 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:09.100 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:09.100 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:09.100 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:09.100 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:09.100 13:49:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:09.100 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:09.100 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:09.100 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:09.100 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:09.100 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:09.100 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:09.100 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.100 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.100 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:09.100 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.100 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:09.100 "name": "Existed_Raid", 00:29:09.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.100 "strip_size_kb": 64, 00:29:09.100 "state": "configuring", 00:29:09.100 "raid_level": "raid5f", 00:29:09.100 "superblock": false, 00:29:09.100 "num_base_bdevs": 3, 00:29:09.100 "num_base_bdevs_discovered": 1, 00:29:09.100 "num_base_bdevs_operational": 3, 00:29:09.100 "base_bdevs_list": [ 00:29:09.100 { 00:29:09.100 "name": "BaseBdev1", 00:29:09.100 "uuid": "ce00271e-ee84-4206-96b3-c7baac56e1a1", 00:29:09.100 "is_configured": true, 00:29:09.100 "data_offset": 0, 00:29:09.100 "data_size": 65536 00:29:09.100 }, 00:29:09.100 { 00:29:09.100 "name": 
"BaseBdev2", 00:29:09.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.100 "is_configured": false, 00:29:09.100 "data_offset": 0, 00:29:09.100 "data_size": 0 00:29:09.100 }, 00:29:09.100 { 00:29:09.100 "name": "BaseBdev3", 00:29:09.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.100 "is_configured": false, 00:29:09.100 "data_offset": 0, 00:29:09.100 "data_size": 0 00:29:09.100 } 00:29:09.100 ] 00:29:09.100 }' 00:29:09.100 13:49:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:09.100 13:49:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.669 [2024-11-20 13:49:12.359881] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:09.669 [2024-11-20 13:49:12.360179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.669 [2024-11-20 13:49:12.367921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:09.669 [2024-11-20 13:49:12.370460] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:29:09.669 [2024-11-20 13:49:12.370509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:09.669 [2024-11-20 13:49:12.370534] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:09.669 [2024-11-20 13:49:12.370548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:09.669 "name": "Existed_Raid", 00:29:09.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.669 "strip_size_kb": 64, 00:29:09.669 "state": "configuring", 00:29:09.669 "raid_level": "raid5f", 00:29:09.669 "superblock": false, 00:29:09.669 "num_base_bdevs": 3, 00:29:09.669 "num_base_bdevs_discovered": 1, 00:29:09.669 "num_base_bdevs_operational": 3, 00:29:09.669 "base_bdevs_list": [ 00:29:09.669 { 00:29:09.669 "name": "BaseBdev1", 00:29:09.669 "uuid": "ce00271e-ee84-4206-96b3-c7baac56e1a1", 00:29:09.669 "is_configured": true, 00:29:09.669 "data_offset": 0, 00:29:09.669 "data_size": 65536 00:29:09.669 }, 00:29:09.669 { 00:29:09.669 "name": "BaseBdev2", 00:29:09.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.669 "is_configured": false, 00:29:09.669 "data_offset": 0, 00:29:09.669 "data_size": 0 00:29:09.669 }, 00:29:09.669 { 00:29:09.669 "name": "BaseBdev3", 00:29:09.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.669 "is_configured": false, 00:29:09.669 "data_offset": 0, 00:29:09.669 "data_size": 0 00:29:09.669 } 00:29:09.669 ] 00:29:09.669 }' 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:09.669 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:10.237 13:49:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:10.237 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.237 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:10.237 [2024-11-20 13:49:12.949886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:10.237 BaseBdev2 00:29:10.237 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.237 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:29:10.237 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:29:10.237 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:10.237 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:10.237 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:10.237 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:10.237 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:10.237 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.237 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:10.237 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.237 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:10.237 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.237 13:49:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:29:10.237 [ 00:29:10.237 { 00:29:10.237 "name": "BaseBdev2", 00:29:10.237 "aliases": [ 00:29:10.237 "977f9850-7868-4f53-a436-740a33333065" 00:29:10.237 ], 00:29:10.237 "product_name": "Malloc disk", 00:29:10.237 "block_size": 512, 00:29:10.237 "num_blocks": 65536, 00:29:10.237 "uuid": "977f9850-7868-4f53-a436-740a33333065", 00:29:10.237 "assigned_rate_limits": { 00:29:10.237 "rw_ios_per_sec": 0, 00:29:10.237 "rw_mbytes_per_sec": 0, 00:29:10.237 "r_mbytes_per_sec": 0, 00:29:10.237 "w_mbytes_per_sec": 0 00:29:10.237 }, 00:29:10.237 "claimed": true, 00:29:10.237 "claim_type": "exclusive_write", 00:29:10.237 "zoned": false, 00:29:10.237 "supported_io_types": { 00:29:10.237 "read": true, 00:29:10.237 "write": true, 00:29:10.237 "unmap": true, 00:29:10.237 "flush": true, 00:29:10.237 "reset": true, 00:29:10.237 "nvme_admin": false, 00:29:10.237 "nvme_io": false, 00:29:10.237 "nvme_io_md": false, 00:29:10.238 "write_zeroes": true, 00:29:10.238 "zcopy": true, 00:29:10.238 "get_zone_info": false, 00:29:10.238 "zone_management": false, 00:29:10.238 "zone_append": false, 00:29:10.238 "compare": false, 00:29:10.238 "compare_and_write": false, 00:29:10.238 "abort": true, 00:29:10.238 "seek_hole": false, 00:29:10.238 "seek_data": false, 00:29:10.238 "copy": true, 00:29:10.238 "nvme_iov_md": false 00:29:10.238 }, 00:29:10.238 "memory_domains": [ 00:29:10.238 { 00:29:10.238 "dma_device_id": "system", 00:29:10.238 "dma_device_type": 1 00:29:10.238 }, 00:29:10.238 { 00:29:10.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:10.238 "dma_device_type": 2 00:29:10.238 } 00:29:10.238 ], 00:29:10.238 "driver_specific": {} 00:29:10.238 } 00:29:10.238 ] 00:29:10.238 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.238 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:10.238 13:49:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:10.238 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:10.238 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:10.238 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:10.238 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:10.238 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:10.238 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:10.238 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:10.238 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:10.238 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:10.238 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:10.238 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:10.238 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:10.238 13:49:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:10.238 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.238 13:49:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:10.238 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.238 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:29:10.238 "name": "Existed_Raid", 00:29:10.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:10.238 "strip_size_kb": 64, 00:29:10.238 "state": "configuring", 00:29:10.238 "raid_level": "raid5f", 00:29:10.238 "superblock": false, 00:29:10.238 "num_base_bdevs": 3, 00:29:10.238 "num_base_bdevs_discovered": 2, 00:29:10.238 "num_base_bdevs_operational": 3, 00:29:10.238 "base_bdevs_list": [ 00:29:10.238 { 00:29:10.238 "name": "BaseBdev1", 00:29:10.238 "uuid": "ce00271e-ee84-4206-96b3-c7baac56e1a1", 00:29:10.238 "is_configured": true, 00:29:10.238 "data_offset": 0, 00:29:10.238 "data_size": 65536 00:29:10.238 }, 00:29:10.238 { 00:29:10.238 "name": "BaseBdev2", 00:29:10.238 "uuid": "977f9850-7868-4f53-a436-740a33333065", 00:29:10.238 "is_configured": true, 00:29:10.238 "data_offset": 0, 00:29:10.238 "data_size": 65536 00:29:10.238 }, 00:29:10.238 { 00:29:10.238 "name": "BaseBdev3", 00:29:10.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:10.238 "is_configured": false, 00:29:10.238 "data_offset": 0, 00:29:10.238 "data_size": 0 00:29:10.238 } 00:29:10.238 ] 00:29:10.238 }' 00:29:10.238 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:10.238 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:10.806 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:29:10.806 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.806 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:10.806 [2024-11-20 13:49:13.588288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:10.806 [2024-11-20 13:49:13.588593] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:10.807 [2024-11-20 13:49:13.588630] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:29:10.807 [2024-11-20 13:49:13.589057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:10.807 [2024-11-20 13:49:13.594729] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:10.807 [2024-11-20 13:49:13.594758] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:29:10.807 [2024-11-20 13:49:13.595146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:10.807 BaseBdev3 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:10.807 [ 00:29:10.807 { 00:29:10.807 "name": "BaseBdev3", 00:29:10.807 "aliases": [ 00:29:10.807 "edaab175-5dae-4a90-93bf-ed1c5119898d" 00:29:10.807 ], 00:29:10.807 "product_name": "Malloc disk", 00:29:10.807 "block_size": 512, 00:29:10.807 "num_blocks": 65536, 00:29:10.807 "uuid": "edaab175-5dae-4a90-93bf-ed1c5119898d", 00:29:10.807 "assigned_rate_limits": { 00:29:10.807 "rw_ios_per_sec": 0, 00:29:10.807 "rw_mbytes_per_sec": 0, 00:29:10.807 "r_mbytes_per_sec": 0, 00:29:10.807 "w_mbytes_per_sec": 0 00:29:10.807 }, 00:29:10.807 "claimed": true, 00:29:10.807 "claim_type": "exclusive_write", 00:29:10.807 "zoned": false, 00:29:10.807 "supported_io_types": { 00:29:10.807 "read": true, 00:29:10.807 "write": true, 00:29:10.807 "unmap": true, 00:29:10.807 "flush": true, 00:29:10.807 "reset": true, 00:29:10.807 "nvme_admin": false, 00:29:10.807 "nvme_io": false, 00:29:10.807 "nvme_io_md": false, 00:29:10.807 "write_zeroes": true, 00:29:10.807 "zcopy": true, 00:29:10.807 "get_zone_info": false, 00:29:10.807 "zone_management": false, 00:29:10.807 "zone_append": false, 00:29:10.807 "compare": false, 00:29:10.807 "compare_and_write": false, 00:29:10.807 "abort": true, 00:29:10.807 "seek_hole": false, 00:29:10.807 "seek_data": false, 00:29:10.807 "copy": true, 00:29:10.807 "nvme_iov_md": false 00:29:10.807 }, 00:29:10.807 "memory_domains": [ 00:29:10.807 { 00:29:10.807 "dma_device_id": "system", 00:29:10.807 "dma_device_type": 1 00:29:10.807 }, 00:29:10.807 { 00:29:10.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:10.807 "dma_device_type": 2 00:29:10.807 } 00:29:10.807 ], 00:29:10.807 "driver_specific": {} 00:29:10.807 } 00:29:10.807 ] 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:10.807 13:49:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:10.807 "name": "Existed_Raid", 00:29:10.807 "uuid": "01cdd754-d4a1-4bf3-8589-85928fdad6ad", 00:29:10.807 "strip_size_kb": 64, 00:29:10.807 "state": "online", 00:29:10.807 "raid_level": "raid5f", 00:29:10.807 "superblock": false, 00:29:10.807 "num_base_bdevs": 3, 00:29:10.807 "num_base_bdevs_discovered": 3, 00:29:10.807 "num_base_bdevs_operational": 3, 00:29:10.807 "base_bdevs_list": [ 00:29:10.807 { 00:29:10.807 "name": "BaseBdev1", 00:29:10.807 "uuid": "ce00271e-ee84-4206-96b3-c7baac56e1a1", 00:29:10.807 "is_configured": true, 00:29:10.807 "data_offset": 0, 00:29:10.807 "data_size": 65536 00:29:10.807 }, 00:29:10.807 { 00:29:10.807 "name": "BaseBdev2", 00:29:10.807 "uuid": "977f9850-7868-4f53-a436-740a33333065", 00:29:10.807 "is_configured": true, 00:29:10.807 "data_offset": 0, 00:29:10.807 "data_size": 65536 00:29:10.807 }, 00:29:10.807 { 00:29:10.807 "name": "BaseBdev3", 00:29:10.807 "uuid": "edaab175-5dae-4a90-93bf-ed1c5119898d", 00:29:10.807 "is_configured": true, 00:29:10.807 "data_offset": 0, 00:29:10.807 "data_size": 65536 00:29:10.807 } 00:29:10.807 ] 00:29:10.807 }' 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:10.807 13:49:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.376 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:29:11.376 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:11.376 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:11.377 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:11.377 13:49:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:11.377 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:11.377 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:11.377 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:11.377 13:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.377 13:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.377 [2024-11-20 13:49:14.170169] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:11.377 13:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.377 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:11.377 "name": "Existed_Raid", 00:29:11.377 "aliases": [ 00:29:11.377 "01cdd754-d4a1-4bf3-8589-85928fdad6ad" 00:29:11.377 ], 00:29:11.377 "product_name": "Raid Volume", 00:29:11.377 "block_size": 512, 00:29:11.377 "num_blocks": 131072, 00:29:11.377 "uuid": "01cdd754-d4a1-4bf3-8589-85928fdad6ad", 00:29:11.377 "assigned_rate_limits": { 00:29:11.377 "rw_ios_per_sec": 0, 00:29:11.377 "rw_mbytes_per_sec": 0, 00:29:11.377 "r_mbytes_per_sec": 0, 00:29:11.377 "w_mbytes_per_sec": 0 00:29:11.377 }, 00:29:11.377 "claimed": false, 00:29:11.377 "zoned": false, 00:29:11.377 "supported_io_types": { 00:29:11.377 "read": true, 00:29:11.377 "write": true, 00:29:11.377 "unmap": false, 00:29:11.377 "flush": false, 00:29:11.377 "reset": true, 00:29:11.377 "nvme_admin": false, 00:29:11.377 "nvme_io": false, 00:29:11.377 "nvme_io_md": false, 00:29:11.377 "write_zeroes": true, 00:29:11.377 "zcopy": false, 00:29:11.377 "get_zone_info": false, 00:29:11.377 "zone_management": false, 00:29:11.377 "zone_append": false, 
00:29:11.377 "compare": false, 00:29:11.377 "compare_and_write": false, 00:29:11.377 "abort": false, 00:29:11.377 "seek_hole": false, 00:29:11.377 "seek_data": false, 00:29:11.377 "copy": false, 00:29:11.377 "nvme_iov_md": false 00:29:11.377 }, 00:29:11.377 "driver_specific": { 00:29:11.377 "raid": { 00:29:11.377 "uuid": "01cdd754-d4a1-4bf3-8589-85928fdad6ad", 00:29:11.377 "strip_size_kb": 64, 00:29:11.377 "state": "online", 00:29:11.377 "raid_level": "raid5f", 00:29:11.377 "superblock": false, 00:29:11.377 "num_base_bdevs": 3, 00:29:11.377 "num_base_bdevs_discovered": 3, 00:29:11.377 "num_base_bdevs_operational": 3, 00:29:11.377 "base_bdevs_list": [ 00:29:11.377 { 00:29:11.377 "name": "BaseBdev1", 00:29:11.377 "uuid": "ce00271e-ee84-4206-96b3-c7baac56e1a1", 00:29:11.377 "is_configured": true, 00:29:11.377 "data_offset": 0, 00:29:11.377 "data_size": 65536 00:29:11.377 }, 00:29:11.377 { 00:29:11.377 "name": "BaseBdev2", 00:29:11.377 "uuid": "977f9850-7868-4f53-a436-740a33333065", 00:29:11.377 "is_configured": true, 00:29:11.377 "data_offset": 0, 00:29:11.377 "data_size": 65536 00:29:11.377 }, 00:29:11.377 { 00:29:11.377 "name": "BaseBdev3", 00:29:11.377 "uuid": "edaab175-5dae-4a90-93bf-ed1c5119898d", 00:29:11.377 "is_configured": true, 00:29:11.377 "data_offset": 0, 00:29:11.377 "data_size": 65536 00:29:11.377 } 00:29:11.377 ] 00:29:11.377 } 00:29:11.377 } 00:29:11.377 }' 00:29:11.377 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:29:11.636 BaseBdev2 00:29:11.636 BaseBdev3' 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.636 13:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.636 [2024-11-20 13:49:14.518141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:29:11.896 
13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:11.896 "name": "Existed_Raid", 00:29:11.896 "uuid": "01cdd754-d4a1-4bf3-8589-85928fdad6ad", 00:29:11.896 "strip_size_kb": 64, 00:29:11.896 "state": 
"online", 00:29:11.896 "raid_level": "raid5f", 00:29:11.896 "superblock": false, 00:29:11.896 "num_base_bdevs": 3, 00:29:11.896 "num_base_bdevs_discovered": 2, 00:29:11.896 "num_base_bdevs_operational": 2, 00:29:11.896 "base_bdevs_list": [ 00:29:11.896 { 00:29:11.896 "name": null, 00:29:11.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:11.896 "is_configured": false, 00:29:11.896 "data_offset": 0, 00:29:11.896 "data_size": 65536 00:29:11.896 }, 00:29:11.896 { 00:29:11.896 "name": "BaseBdev2", 00:29:11.896 "uuid": "977f9850-7868-4f53-a436-740a33333065", 00:29:11.896 "is_configured": true, 00:29:11.896 "data_offset": 0, 00:29:11.896 "data_size": 65536 00:29:11.896 }, 00:29:11.896 { 00:29:11.896 "name": "BaseBdev3", 00:29:11.896 "uuid": "edaab175-5dae-4a90-93bf-ed1c5119898d", 00:29:11.896 "is_configured": true, 00:29:11.896 "data_offset": 0, 00:29:11.896 "data_size": 65536 00:29:11.896 } 00:29:11.896 ] 00:29:11.896 }' 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:11.896 13:49:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.472 [2024-11-20 13:49:15.212737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:12.472 [2024-11-20 13:49:15.212918] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:12.472 [2024-11-20 13:49:15.311272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.472 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.472 [2024-11-20 13:49:15.367303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:12.472 [2024-11-20 13:49:15.367378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.746 BaseBdev2 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.746 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:29:12.746 [ 00:29:12.746 { 00:29:12.746 "name": "BaseBdev2", 00:29:12.746 "aliases": [ 00:29:12.746 "3618d1e9-72c3-40a4-850d-dfd79b2da5d6" 00:29:12.746 ], 00:29:12.747 "product_name": "Malloc disk", 00:29:12.747 "block_size": 512, 00:29:12.747 "num_blocks": 65536, 00:29:12.747 "uuid": "3618d1e9-72c3-40a4-850d-dfd79b2da5d6", 00:29:12.747 "assigned_rate_limits": { 00:29:12.747 "rw_ios_per_sec": 0, 00:29:12.747 "rw_mbytes_per_sec": 0, 00:29:12.747 "r_mbytes_per_sec": 0, 00:29:12.747 "w_mbytes_per_sec": 0 00:29:12.747 }, 00:29:12.747 "claimed": false, 00:29:12.747 "zoned": false, 00:29:12.747 "supported_io_types": { 00:29:12.747 "read": true, 00:29:12.747 "write": true, 00:29:12.747 "unmap": true, 00:29:12.747 "flush": true, 00:29:12.747 "reset": true, 00:29:12.747 "nvme_admin": false, 00:29:12.747 "nvme_io": false, 00:29:12.747 "nvme_io_md": false, 00:29:12.747 "write_zeroes": true, 00:29:12.747 "zcopy": true, 00:29:12.747 "get_zone_info": false, 00:29:12.747 "zone_management": false, 00:29:12.747 "zone_append": false, 00:29:12.747 "compare": false, 00:29:12.747 "compare_and_write": false, 00:29:12.747 "abort": true, 00:29:12.747 "seek_hole": false, 00:29:12.747 "seek_data": false, 00:29:12.747 "copy": true, 00:29:12.747 "nvme_iov_md": false 00:29:12.747 }, 00:29:12.747 "memory_domains": [ 00:29:12.747 { 00:29:12.747 "dma_device_id": "system", 00:29:12.747 "dma_device_type": 1 00:29:12.747 }, 00:29:12.747 { 00:29:12.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:12.747 "dma_device_type": 2 00:29:12.747 } 00:29:12.747 ], 00:29:12.747 "driver_specific": {} 00:29:12.747 } 00:29:12.747 ] 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.747 BaseBdev3 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.747 13:49:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:29:13.006 [ 00:29:13.006 { 00:29:13.006 "name": "BaseBdev3", 00:29:13.006 "aliases": [ 00:29:13.006 "280964ac-9bf2-499d-b069-ca11a10b429c" 00:29:13.006 ], 00:29:13.006 "product_name": "Malloc disk", 00:29:13.006 "block_size": 512, 00:29:13.006 "num_blocks": 65536, 00:29:13.006 "uuid": "280964ac-9bf2-499d-b069-ca11a10b429c", 00:29:13.006 "assigned_rate_limits": { 00:29:13.006 "rw_ios_per_sec": 0, 00:29:13.006 "rw_mbytes_per_sec": 0, 00:29:13.006 "r_mbytes_per_sec": 0, 00:29:13.006 "w_mbytes_per_sec": 0 00:29:13.006 }, 00:29:13.006 "claimed": false, 00:29:13.006 "zoned": false, 00:29:13.006 "supported_io_types": { 00:29:13.006 "read": true, 00:29:13.006 "write": true, 00:29:13.006 "unmap": true, 00:29:13.006 "flush": true, 00:29:13.006 "reset": true, 00:29:13.006 "nvme_admin": false, 00:29:13.006 "nvme_io": false, 00:29:13.006 "nvme_io_md": false, 00:29:13.006 "write_zeroes": true, 00:29:13.006 "zcopy": true, 00:29:13.006 "get_zone_info": false, 00:29:13.006 "zone_management": false, 00:29:13.006 "zone_append": false, 00:29:13.006 "compare": false, 00:29:13.006 "compare_and_write": false, 00:29:13.006 "abort": true, 00:29:13.006 "seek_hole": false, 00:29:13.006 "seek_data": false, 00:29:13.006 "copy": true, 00:29:13.006 "nvme_iov_md": false 00:29:13.006 }, 00:29:13.006 "memory_domains": [ 00:29:13.006 { 00:29:13.006 "dma_device_id": "system", 00:29:13.006 "dma_device_type": 1 00:29:13.006 }, 00:29:13.006 { 00:29:13.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:13.006 "dma_device_type": 2 00:29:13.006 } 00:29:13.006 ], 00:29:13.006 "driver_specific": {} 00:29:13.006 } 00:29:13.006 ] 00:29:13.006 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.006 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:13.006 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:13.006 13:49:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:13.006 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:13.006 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.006 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.006 [2024-11-20 13:49:15.688089] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:13.006 [2024-11-20 13:49:15.688325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:13.006 [2024-11-20 13:49:15.688388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:13.006 [2024-11-20 13:49:15.691084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:13.006 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.006 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:13.006 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:13.006 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:13.006 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:13.006 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:13.006 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:13.006 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:13.006 13:49:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:13.006 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:13.006 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:13.006 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:13.006 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.006 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.007 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:13.007 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.007 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:13.007 "name": "Existed_Raid", 00:29:13.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:13.007 "strip_size_kb": 64, 00:29:13.007 "state": "configuring", 00:29:13.007 "raid_level": "raid5f", 00:29:13.007 "superblock": false, 00:29:13.007 "num_base_bdevs": 3, 00:29:13.007 "num_base_bdevs_discovered": 2, 00:29:13.007 "num_base_bdevs_operational": 3, 00:29:13.007 "base_bdevs_list": [ 00:29:13.007 { 00:29:13.007 "name": "BaseBdev1", 00:29:13.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:13.007 "is_configured": false, 00:29:13.007 "data_offset": 0, 00:29:13.007 "data_size": 0 00:29:13.007 }, 00:29:13.007 { 00:29:13.007 "name": "BaseBdev2", 00:29:13.007 "uuid": "3618d1e9-72c3-40a4-850d-dfd79b2da5d6", 00:29:13.007 "is_configured": true, 00:29:13.007 "data_offset": 0, 00:29:13.007 "data_size": 65536 00:29:13.007 }, 00:29:13.007 { 00:29:13.007 "name": "BaseBdev3", 00:29:13.007 "uuid": "280964ac-9bf2-499d-b069-ca11a10b429c", 00:29:13.007 "is_configured": true, 
00:29:13.007 "data_offset": 0, 00:29:13.007 "data_size": 65536 00:29:13.007 } 00:29:13.007 ] 00:29:13.007 }' 00:29:13.007 13:49:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:13.007 13:49:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.575 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:29:13.575 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.575 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.575 [2024-11-20 13:49:16.240343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:13.575 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.575 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:13.575 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:13.575 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:13.575 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:13.575 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:13.575 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:13.575 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:13.575 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:13.575 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:13.575 13:49:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:13.575 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:13.575 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:13.575 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.575 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.575 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.575 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:13.575 "name": "Existed_Raid", 00:29:13.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:13.575 "strip_size_kb": 64, 00:29:13.575 "state": "configuring", 00:29:13.575 "raid_level": "raid5f", 00:29:13.575 "superblock": false, 00:29:13.576 "num_base_bdevs": 3, 00:29:13.576 "num_base_bdevs_discovered": 1, 00:29:13.576 "num_base_bdevs_operational": 3, 00:29:13.576 "base_bdevs_list": [ 00:29:13.576 { 00:29:13.576 "name": "BaseBdev1", 00:29:13.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:13.576 "is_configured": false, 00:29:13.576 "data_offset": 0, 00:29:13.576 "data_size": 0 00:29:13.576 }, 00:29:13.576 { 00:29:13.576 "name": null, 00:29:13.576 "uuid": "3618d1e9-72c3-40a4-850d-dfd79b2da5d6", 00:29:13.576 "is_configured": false, 00:29:13.576 "data_offset": 0, 00:29:13.576 "data_size": 65536 00:29:13.576 }, 00:29:13.576 { 00:29:13.576 "name": "BaseBdev3", 00:29:13.576 "uuid": "280964ac-9bf2-499d-b069-ca11a10b429c", 00:29:13.576 "is_configured": true, 00:29:13.576 "data_offset": 0, 00:29:13.576 "data_size": 65536 00:29:13.576 } 00:29:13.576 ] 00:29:13.576 }' 00:29:13.576 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:13.576 13:49:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.145 [2024-11-20 13:49:16.876806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:14.145 BaseBdev1 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:14.145 13:49:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.145 [ 00:29:14.145 { 00:29:14.145 "name": "BaseBdev1", 00:29:14.145 "aliases": [ 00:29:14.145 "f2634a9c-11b6-47b4-b813-59a98c94c215" 00:29:14.145 ], 00:29:14.145 "product_name": "Malloc disk", 00:29:14.145 "block_size": 512, 00:29:14.145 "num_blocks": 65536, 00:29:14.145 "uuid": "f2634a9c-11b6-47b4-b813-59a98c94c215", 00:29:14.145 "assigned_rate_limits": { 00:29:14.145 "rw_ios_per_sec": 0, 00:29:14.145 "rw_mbytes_per_sec": 0, 00:29:14.145 "r_mbytes_per_sec": 0, 00:29:14.145 "w_mbytes_per_sec": 0 00:29:14.145 }, 00:29:14.145 "claimed": true, 00:29:14.145 "claim_type": "exclusive_write", 00:29:14.145 "zoned": false, 00:29:14.145 "supported_io_types": { 00:29:14.145 "read": true, 00:29:14.145 "write": true, 00:29:14.145 "unmap": true, 00:29:14.145 "flush": true, 00:29:14.145 "reset": true, 00:29:14.145 "nvme_admin": false, 00:29:14.145 "nvme_io": false, 00:29:14.145 "nvme_io_md": false, 00:29:14.145 "write_zeroes": true, 00:29:14.145 "zcopy": true, 00:29:14.145 "get_zone_info": false, 00:29:14.145 "zone_management": false, 00:29:14.145 "zone_append": false, 00:29:14.145 
"compare": false, 00:29:14.145 "compare_and_write": false, 00:29:14.145 "abort": true, 00:29:14.145 "seek_hole": false, 00:29:14.145 "seek_data": false, 00:29:14.145 "copy": true, 00:29:14.145 "nvme_iov_md": false 00:29:14.145 }, 00:29:14.145 "memory_domains": [ 00:29:14.145 { 00:29:14.145 "dma_device_id": "system", 00:29:14.145 "dma_device_type": 1 00:29:14.145 }, 00:29:14.145 { 00:29:14.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:14.145 "dma_device_type": 2 00:29:14.145 } 00:29:14.145 ], 00:29:14.145 "driver_specific": {} 00:29:14.145 } 00:29:14.145 ] 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:14.145 13:49:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.145 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:14.145 "name": "Existed_Raid", 00:29:14.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:14.145 "strip_size_kb": 64, 00:29:14.145 "state": "configuring", 00:29:14.145 "raid_level": "raid5f", 00:29:14.145 "superblock": false, 00:29:14.145 "num_base_bdevs": 3, 00:29:14.145 "num_base_bdevs_discovered": 2, 00:29:14.145 "num_base_bdevs_operational": 3, 00:29:14.145 "base_bdevs_list": [ 00:29:14.145 { 00:29:14.145 "name": "BaseBdev1", 00:29:14.145 "uuid": "f2634a9c-11b6-47b4-b813-59a98c94c215", 00:29:14.145 "is_configured": true, 00:29:14.145 "data_offset": 0, 00:29:14.145 "data_size": 65536 00:29:14.145 }, 00:29:14.145 { 00:29:14.145 "name": null, 00:29:14.146 "uuid": "3618d1e9-72c3-40a4-850d-dfd79b2da5d6", 00:29:14.146 "is_configured": false, 00:29:14.146 "data_offset": 0, 00:29:14.146 "data_size": 65536 00:29:14.146 }, 00:29:14.146 { 00:29:14.146 "name": "BaseBdev3", 00:29:14.146 "uuid": "280964ac-9bf2-499d-b069-ca11a10b429c", 00:29:14.146 "is_configured": true, 00:29:14.146 "data_offset": 0, 00:29:14.146 "data_size": 65536 00:29:14.146 } 00:29:14.146 ] 00:29:14.146 }' 00:29:14.146 13:49:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:14.146 13:49:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.714 13:49:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.714 [2024-11-20 13:49:17.485056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:14.714 13:49:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:14.714 "name": "Existed_Raid", 00:29:14.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:14.714 "strip_size_kb": 64, 00:29:14.714 "state": "configuring", 00:29:14.714 "raid_level": "raid5f", 00:29:14.714 "superblock": false, 00:29:14.714 "num_base_bdevs": 3, 00:29:14.714 "num_base_bdevs_discovered": 1, 00:29:14.714 "num_base_bdevs_operational": 3, 00:29:14.714 "base_bdevs_list": [ 00:29:14.714 { 00:29:14.714 "name": "BaseBdev1", 00:29:14.714 "uuid": "f2634a9c-11b6-47b4-b813-59a98c94c215", 00:29:14.714 "is_configured": true, 00:29:14.714 "data_offset": 0, 00:29:14.714 "data_size": 65536 00:29:14.714 }, 00:29:14.714 { 00:29:14.714 "name": null, 00:29:14.714 "uuid": "3618d1e9-72c3-40a4-850d-dfd79b2da5d6", 00:29:14.714 "is_configured": false, 00:29:14.714 "data_offset": 0, 00:29:14.714 "data_size": 65536 00:29:14.714 }, 00:29:14.714 { 00:29:14.714 "name": null, 
00:29:14.714 "uuid": "280964ac-9bf2-499d-b069-ca11a10b429c", 00:29:14.714 "is_configured": false, 00:29:14.714 "data_offset": 0, 00:29:14.714 "data_size": 65536 00:29:14.714 } 00:29:14.714 ] 00:29:14.714 }' 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:14.714 13:49:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:29:15.282 13:49:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.282 [2024-11-20 13:49:18.053270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:15.282 13:49:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.282 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:15.282 "name": "Existed_Raid", 00:29:15.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:15.282 "strip_size_kb": 64, 00:29:15.282 "state": "configuring", 00:29:15.282 "raid_level": "raid5f", 00:29:15.282 "superblock": false, 00:29:15.282 "num_base_bdevs": 3, 00:29:15.283 "num_base_bdevs_discovered": 2, 00:29:15.283 "num_base_bdevs_operational": 3, 00:29:15.283 "base_bdevs_list": [ 00:29:15.283 { 
00:29:15.283 "name": "BaseBdev1", 00:29:15.283 "uuid": "f2634a9c-11b6-47b4-b813-59a98c94c215", 00:29:15.283 "is_configured": true, 00:29:15.283 "data_offset": 0, 00:29:15.283 "data_size": 65536 00:29:15.283 }, 00:29:15.283 { 00:29:15.283 "name": null, 00:29:15.283 "uuid": "3618d1e9-72c3-40a4-850d-dfd79b2da5d6", 00:29:15.283 "is_configured": false, 00:29:15.283 "data_offset": 0, 00:29:15.283 "data_size": 65536 00:29:15.283 }, 00:29:15.283 { 00:29:15.283 "name": "BaseBdev3", 00:29:15.283 "uuid": "280964ac-9bf2-499d-b069-ca11a10b429c", 00:29:15.283 "is_configured": true, 00:29:15.283 "data_offset": 0, 00:29:15.283 "data_size": 65536 00:29:15.283 } 00:29:15.283 ] 00:29:15.283 }' 00:29:15.283 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:15.283 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.850 [2024-11-20 13:49:18.629484] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:15.850 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.110 13:49:18 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:16.110 "name": "Existed_Raid", 00:29:16.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:16.110 "strip_size_kb": 64, 00:29:16.110 "state": "configuring", 00:29:16.110 "raid_level": "raid5f", 00:29:16.110 "superblock": false, 00:29:16.110 "num_base_bdevs": 3, 00:29:16.110 "num_base_bdevs_discovered": 1, 00:29:16.110 "num_base_bdevs_operational": 3, 00:29:16.110 "base_bdevs_list": [ 00:29:16.110 { 00:29:16.110 "name": null, 00:29:16.110 "uuid": "f2634a9c-11b6-47b4-b813-59a98c94c215", 00:29:16.110 "is_configured": false, 00:29:16.110 "data_offset": 0, 00:29:16.110 "data_size": 65536 00:29:16.110 }, 00:29:16.110 { 00:29:16.110 "name": null, 00:29:16.110 "uuid": "3618d1e9-72c3-40a4-850d-dfd79b2da5d6", 00:29:16.110 "is_configured": false, 00:29:16.110 "data_offset": 0, 00:29:16.110 "data_size": 65536 00:29:16.110 }, 00:29:16.110 { 00:29:16.110 "name": "BaseBdev3", 00:29:16.110 "uuid": "280964ac-9bf2-499d-b069-ca11a10b429c", 00:29:16.110 "is_configured": true, 00:29:16.110 "data_offset": 0, 00:29:16.110 "data_size": 65536 00:29:16.110 } 00:29:16.110 ] 00:29:16.110 }' 00:29:16.110 13:49:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:16.110 13:49:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:16.369 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:16.369 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:29:16.369 13:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.369 13:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:16.628 [2024-11-20 13:49:19.321444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:16.628 13:49:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:16.628 "name": "Existed_Raid", 00:29:16.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:16.628 "strip_size_kb": 64, 00:29:16.628 "state": "configuring", 00:29:16.628 "raid_level": "raid5f", 00:29:16.628 "superblock": false, 00:29:16.628 "num_base_bdevs": 3, 00:29:16.628 "num_base_bdevs_discovered": 2, 00:29:16.628 "num_base_bdevs_operational": 3, 00:29:16.628 "base_bdevs_list": [ 00:29:16.628 { 00:29:16.628 "name": null, 00:29:16.628 "uuid": "f2634a9c-11b6-47b4-b813-59a98c94c215", 00:29:16.628 "is_configured": false, 00:29:16.628 "data_offset": 0, 00:29:16.628 "data_size": 65536 00:29:16.628 }, 00:29:16.628 { 00:29:16.628 "name": "BaseBdev2", 00:29:16.628 "uuid": "3618d1e9-72c3-40a4-850d-dfd79b2da5d6", 00:29:16.628 "is_configured": true, 00:29:16.628 "data_offset": 0, 00:29:16.628 "data_size": 65536 00:29:16.628 }, 00:29:16.628 { 00:29:16.628 "name": "BaseBdev3", 00:29:16.628 "uuid": "280964ac-9bf2-499d-b069-ca11a10b429c", 00:29:16.628 "is_configured": true, 00:29:16.628 "data_offset": 0, 00:29:16.628 "data_size": 65536 00:29:16.628 } 00:29:16.628 ] 00:29:16.628 }' 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:16.628 13:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.194 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:17.194 13:49:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:29:17.194 13:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.194 13:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.194 13:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.194 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:29:17.194 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:17.194 13:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.194 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:29:17.194 13:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.194 13:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.194 13:49:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f2634a9c-11b6-47b4-b813-59a98c94c215 00:29:17.194 13:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.194 13:49:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.194 [2024-11-20 13:49:20.022297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:29:17.194 [2024-11-20 13:49:20.022393] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:29:17.194 [2024-11-20 13:49:20.022411] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:29:17.194 [2024-11-20 13:49:20.022790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:29:17.194 [2024-11-20 13:49:20.028329] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:29:17.194 [2024-11-20 13:49:20.028356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:29:17.194 [2024-11-20 13:49:20.028782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:17.194 NewBaseBdev 00:29:17.194 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.194 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:29:17.194 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:29:17.194 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:17.194 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:17.194 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:17.194 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:17.194 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.195 13:49:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.195 [ 00:29:17.195 { 00:29:17.195 "name": "NewBaseBdev", 00:29:17.195 "aliases": [ 00:29:17.195 "f2634a9c-11b6-47b4-b813-59a98c94c215" 00:29:17.195 ], 00:29:17.195 "product_name": "Malloc disk", 00:29:17.195 "block_size": 512, 00:29:17.195 "num_blocks": 65536, 00:29:17.195 "uuid": "f2634a9c-11b6-47b4-b813-59a98c94c215", 00:29:17.195 "assigned_rate_limits": { 00:29:17.195 "rw_ios_per_sec": 0, 00:29:17.195 "rw_mbytes_per_sec": 0, 00:29:17.195 "r_mbytes_per_sec": 0, 00:29:17.195 "w_mbytes_per_sec": 0 00:29:17.195 }, 00:29:17.195 "claimed": true, 00:29:17.195 "claim_type": "exclusive_write", 00:29:17.195 "zoned": false, 00:29:17.195 "supported_io_types": { 00:29:17.195 "read": true, 00:29:17.195 "write": true, 00:29:17.195 "unmap": true, 00:29:17.195 "flush": true, 00:29:17.195 "reset": true, 00:29:17.195 "nvme_admin": false, 00:29:17.195 "nvme_io": false, 00:29:17.195 "nvme_io_md": false, 00:29:17.195 "write_zeroes": true, 00:29:17.195 "zcopy": true, 00:29:17.195 "get_zone_info": false, 00:29:17.195 "zone_management": false, 00:29:17.195 "zone_append": false, 00:29:17.195 "compare": false, 00:29:17.195 "compare_and_write": false, 00:29:17.195 "abort": true, 00:29:17.195 "seek_hole": false, 00:29:17.195 "seek_data": false, 00:29:17.195 "copy": true, 00:29:17.195 "nvme_iov_md": false 00:29:17.195 }, 00:29:17.195 "memory_domains": [ 00:29:17.195 { 00:29:17.195 "dma_device_id": "system", 00:29:17.195 "dma_device_type": 1 00:29:17.195 }, 00:29:17.195 { 00:29:17.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:17.195 "dma_device_type": 2 00:29:17.195 } 00:29:17.195 ], 00:29:17.195 "driver_specific": {} 00:29:17.195 } 00:29:17.195 ] 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:17.195 13:49:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.195 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.452 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:17.452 "name": "Existed_Raid", 00:29:17.452 "uuid": "7e34fd7c-26c4-4455-b143-2dbdbae94949", 00:29:17.452 "strip_size_kb": 64, 00:29:17.452 "state": "online", 
00:29:17.452 "raid_level": "raid5f", 00:29:17.452 "superblock": false, 00:29:17.452 "num_base_bdevs": 3, 00:29:17.452 "num_base_bdevs_discovered": 3, 00:29:17.452 "num_base_bdevs_operational": 3, 00:29:17.452 "base_bdevs_list": [ 00:29:17.452 { 00:29:17.452 "name": "NewBaseBdev", 00:29:17.452 "uuid": "f2634a9c-11b6-47b4-b813-59a98c94c215", 00:29:17.452 "is_configured": true, 00:29:17.452 "data_offset": 0, 00:29:17.452 "data_size": 65536 00:29:17.452 }, 00:29:17.452 { 00:29:17.452 "name": "BaseBdev2", 00:29:17.452 "uuid": "3618d1e9-72c3-40a4-850d-dfd79b2da5d6", 00:29:17.452 "is_configured": true, 00:29:17.452 "data_offset": 0, 00:29:17.452 "data_size": 65536 00:29:17.452 }, 00:29:17.452 { 00:29:17.452 "name": "BaseBdev3", 00:29:17.452 "uuid": "280964ac-9bf2-499d-b069-ca11a10b429c", 00:29:17.452 "is_configured": true, 00:29:17.452 "data_offset": 0, 00:29:17.452 "data_size": 65536 00:29:17.452 } 00:29:17.452 ] 00:29:17.452 }' 00:29:17.452 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:17.452 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.724 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:29:17.724 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:17.724 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:17.724 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:17.724 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:17.724 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:17.724 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:17.724 13:49:20 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:17.724 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.724 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.724 [2024-11-20 13:49:20.595493] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:17.724 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:18.013 "name": "Existed_Raid", 00:29:18.013 "aliases": [ 00:29:18.013 "7e34fd7c-26c4-4455-b143-2dbdbae94949" 00:29:18.013 ], 00:29:18.013 "product_name": "Raid Volume", 00:29:18.013 "block_size": 512, 00:29:18.013 "num_blocks": 131072, 00:29:18.013 "uuid": "7e34fd7c-26c4-4455-b143-2dbdbae94949", 00:29:18.013 "assigned_rate_limits": { 00:29:18.013 "rw_ios_per_sec": 0, 00:29:18.013 "rw_mbytes_per_sec": 0, 00:29:18.013 "r_mbytes_per_sec": 0, 00:29:18.013 "w_mbytes_per_sec": 0 00:29:18.013 }, 00:29:18.013 "claimed": false, 00:29:18.013 "zoned": false, 00:29:18.013 "supported_io_types": { 00:29:18.013 "read": true, 00:29:18.013 "write": true, 00:29:18.013 "unmap": false, 00:29:18.013 "flush": false, 00:29:18.013 "reset": true, 00:29:18.013 "nvme_admin": false, 00:29:18.013 "nvme_io": false, 00:29:18.013 "nvme_io_md": false, 00:29:18.013 "write_zeroes": true, 00:29:18.013 "zcopy": false, 00:29:18.013 "get_zone_info": false, 00:29:18.013 "zone_management": false, 00:29:18.013 "zone_append": false, 00:29:18.013 "compare": false, 00:29:18.013 "compare_and_write": false, 00:29:18.013 "abort": false, 00:29:18.013 "seek_hole": false, 00:29:18.013 "seek_data": false, 00:29:18.013 "copy": false, 00:29:18.013 "nvme_iov_md": false 00:29:18.013 }, 00:29:18.013 "driver_specific": { 00:29:18.013 "raid": { 00:29:18.013 "uuid": "7e34fd7c-26c4-4455-b143-2dbdbae94949", 
00:29:18.013 "strip_size_kb": 64, 00:29:18.013 "state": "online", 00:29:18.013 "raid_level": "raid5f", 00:29:18.013 "superblock": false, 00:29:18.013 "num_base_bdevs": 3, 00:29:18.013 "num_base_bdevs_discovered": 3, 00:29:18.013 "num_base_bdevs_operational": 3, 00:29:18.013 "base_bdevs_list": [ 00:29:18.013 { 00:29:18.013 "name": "NewBaseBdev", 00:29:18.013 "uuid": "f2634a9c-11b6-47b4-b813-59a98c94c215", 00:29:18.013 "is_configured": true, 00:29:18.013 "data_offset": 0, 00:29:18.013 "data_size": 65536 00:29:18.013 }, 00:29:18.013 { 00:29:18.013 "name": "BaseBdev2", 00:29:18.013 "uuid": "3618d1e9-72c3-40a4-850d-dfd79b2da5d6", 00:29:18.013 "is_configured": true, 00:29:18.013 "data_offset": 0, 00:29:18.013 "data_size": 65536 00:29:18.013 }, 00:29:18.013 { 00:29:18.013 "name": "BaseBdev3", 00:29:18.013 "uuid": "280964ac-9bf2-499d-b069-ca11a10b429c", 00:29:18.013 "is_configured": true, 00:29:18.013 "data_offset": 0, 00:29:18.013 "data_size": 65536 00:29:18.013 } 00:29:18.013 ] 00:29:18.013 } 00:29:18.013 } 00:29:18.013 }' 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:29:18.013 BaseBdev2 00:29:18.013 BaseBdev3' 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.013 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:18.013 [2024-11-20 13:49:20.927339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:18.013 [2024-11-20 13:49:20.927390] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:18.272 [2024-11-20 13:49:20.927545] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:18.272 [2024-11-20 13:49:20.928017] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:18.272 [2024-11-20 13:49:20.928051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:29:18.272 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.272 13:49:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80447 00:29:18.272 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80447 ']' 00:29:18.272 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 
80447 00:29:18.272 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:29:18.272 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:18.272 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80447 00:29:18.272 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:18.272 killing process with pid 80447 00:29:18.272 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:18.272 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80447' 00:29:18.272 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80447 00:29:18.272 [2024-11-20 13:49:20.967149] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:18.272 13:49:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80447 00:29:18.530 [2024-11-20 13:49:21.279481] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:19.906 13:49:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:29:19.906 00:29:19.906 real 0m12.353s 00:29:19.906 user 0m20.178s 00:29:19.906 sys 0m1.876s 00:29:19.906 13:49:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:19.906 ************************************ 00:29:19.906 END TEST raid5f_state_function_test 00:29:19.906 ************************************ 00:29:19.906 13:49:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.906 13:49:22 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:29:19.906 13:49:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 
00:29:19.906 13:49:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:19.906 13:49:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:19.906 ************************************ 00:29:19.906 START TEST raid5f_state_function_test_sb 00:29:19.906 ************************************ 00:29:19.906 13:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:29:19.906 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:29:19.906 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:29:19.906 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:29:19.906 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81080 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:19.907 Process raid pid: 81080 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81080' 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81080 00:29:19.907 13:49:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81080 ']' 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.907 13:49:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:19.907 [2024-11-20 13:49:22.686976] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:29:19.907 [2024-11-20 13:49:22.687177] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.166 [2024-11-20 13:49:22.877512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.166 [2024-11-20 13:49:23.039543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.426 [2024-11-20 13:49:23.314023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:20.426 [2024-11-20 13:49:23.314101] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:29:20.994 13:49:23 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:20.994 [2024-11-20 13:49:23.710500] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:20.994 [2024-11-20 13:49:23.710618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:20.994 [2024-11-20 13:49:23.710648] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:20.994 [2024-11-20 13:49:23.710676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:20.994 [2024-11-20 13:49:23.710695] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:20.994 [2024-11-20 13:49:23.710722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:20.994 "name": "Existed_Raid", 00:29:20.994 "uuid": "4276e2fa-f4df-432a-a3c8-dd5aa959d6cf", 00:29:20.994 "strip_size_kb": 64, 00:29:20.994 "state": "configuring", 00:29:20.994 "raid_level": "raid5f", 00:29:20.994 "superblock": true, 00:29:20.994 "num_base_bdevs": 3, 00:29:20.994 "num_base_bdevs_discovered": 0, 00:29:20.994 "num_base_bdevs_operational": 3, 00:29:20.994 "base_bdevs_list": [ 00:29:20.994 { 00:29:20.994 "name": "BaseBdev1", 00:29:20.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:20.994 "is_configured": false, 00:29:20.994 "data_offset": 0, 00:29:20.994 "data_size": 0 00:29:20.994 }, 00:29:20.994 { 00:29:20.994 "name": "BaseBdev2", 00:29:20.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:20.994 "is_configured": false, 00:29:20.994 
"data_offset": 0, 00:29:20.994 "data_size": 0 00:29:20.994 }, 00:29:20.994 { 00:29:20.994 "name": "BaseBdev3", 00:29:20.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:20.994 "is_configured": false, 00:29:20.994 "data_offset": 0, 00:29:20.994 "data_size": 0 00:29:20.994 } 00:29:20.994 ] 00:29:20.994 }' 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:20.994 13:49:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:21.562 [2024-11-20 13:49:24.266593] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:21.562 [2024-11-20 13:49:24.266665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:21.562 [2024-11-20 13:49:24.274511] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:21.562 [2024-11-20 13:49:24.274579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:21.562 [2024-11-20 13:49:24.274595] 
bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:21.562 [2024-11-20 13:49:24.274621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:21.562 [2024-11-20 13:49:24.274630] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:21.562 [2024-11-20 13:49:24.274643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:21.562 [2024-11-20 13:49:24.327700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:21.562 BaseBdev1 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.562 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:21.563 [ 00:29:21.563 { 00:29:21.563 "name": "BaseBdev1", 00:29:21.563 "aliases": [ 00:29:21.563 "4e229d3c-15ea-442d-a3eb-05d5371afa56" 00:29:21.563 ], 00:29:21.563 "product_name": "Malloc disk", 00:29:21.563 "block_size": 512, 00:29:21.563 "num_blocks": 65536, 00:29:21.563 "uuid": "4e229d3c-15ea-442d-a3eb-05d5371afa56", 00:29:21.563 "assigned_rate_limits": { 00:29:21.563 "rw_ios_per_sec": 0, 00:29:21.563 "rw_mbytes_per_sec": 0, 00:29:21.563 "r_mbytes_per_sec": 0, 00:29:21.563 "w_mbytes_per_sec": 0 00:29:21.563 }, 00:29:21.563 "claimed": true, 00:29:21.563 "claim_type": "exclusive_write", 00:29:21.563 "zoned": false, 00:29:21.563 "supported_io_types": { 00:29:21.563 "read": true, 00:29:21.563 "write": true, 00:29:21.563 "unmap": true, 00:29:21.563 "flush": true, 00:29:21.563 "reset": true, 00:29:21.563 "nvme_admin": false, 00:29:21.563 "nvme_io": false, 00:29:21.563 "nvme_io_md": false, 00:29:21.563 "write_zeroes": true, 00:29:21.563 "zcopy": true, 00:29:21.563 "get_zone_info": false, 00:29:21.563 "zone_management": false, 00:29:21.563 "zone_append": false, 00:29:21.563 "compare": false, 00:29:21.563 "compare_and_write": false, 00:29:21.563 "abort": true, 00:29:21.563 "seek_hole": false, 00:29:21.563 
"seek_data": false, 00:29:21.563 "copy": true, 00:29:21.563 "nvme_iov_md": false 00:29:21.563 }, 00:29:21.563 "memory_domains": [ 00:29:21.563 { 00:29:21.563 "dma_device_id": "system", 00:29:21.563 "dma_device_type": 1 00:29:21.563 }, 00:29:21.563 { 00:29:21.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:21.563 "dma_device_type": 2 00:29:21.563 } 00:29:21.563 ], 00:29:21.563 "driver_specific": {} 00:29:21.563 } 00:29:21.563 ] 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:21.563 "name": "Existed_Raid", 00:29:21.563 "uuid": "92f230be-67d9-4f1f-abe1-274beb426136", 00:29:21.563 "strip_size_kb": 64, 00:29:21.563 "state": "configuring", 00:29:21.563 "raid_level": "raid5f", 00:29:21.563 "superblock": true, 00:29:21.563 "num_base_bdevs": 3, 00:29:21.563 "num_base_bdevs_discovered": 1, 00:29:21.563 "num_base_bdevs_operational": 3, 00:29:21.563 "base_bdevs_list": [ 00:29:21.563 { 00:29:21.563 "name": "BaseBdev1", 00:29:21.563 "uuid": "4e229d3c-15ea-442d-a3eb-05d5371afa56", 00:29:21.563 "is_configured": true, 00:29:21.563 "data_offset": 2048, 00:29:21.563 "data_size": 63488 00:29:21.563 }, 00:29:21.563 { 00:29:21.563 "name": "BaseBdev2", 00:29:21.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:21.563 "is_configured": false, 00:29:21.563 "data_offset": 0, 00:29:21.563 "data_size": 0 00:29:21.563 }, 00:29:21.563 { 00:29:21.563 "name": "BaseBdev3", 00:29:21.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:21.563 "is_configured": false, 00:29:21.563 "data_offset": 0, 00:29:21.563 "data_size": 0 00:29:21.563 } 00:29:21.563 ] 00:29:21.563 }' 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:21.563 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:22.131 [2024-11-20 13:49:24.900013] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:22.131 [2024-11-20 13:49:24.900117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:22.131 [2024-11-20 13:49:24.912145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:22.131 [2024-11-20 13:49:24.915194] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:22.131 [2024-11-20 13:49:24.915248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:22.131 [2024-11-20 13:49:24.915267] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:22.131 [2024-11-20 13:49:24.915283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:22.131 "name": 
"Existed_Raid", 00:29:22.131 "uuid": "09114ea2-aba9-4152-879a-f66c1d5276a3", 00:29:22.131 "strip_size_kb": 64, 00:29:22.131 "state": "configuring", 00:29:22.131 "raid_level": "raid5f", 00:29:22.131 "superblock": true, 00:29:22.131 "num_base_bdevs": 3, 00:29:22.131 "num_base_bdevs_discovered": 1, 00:29:22.131 "num_base_bdevs_operational": 3, 00:29:22.131 "base_bdevs_list": [ 00:29:22.131 { 00:29:22.131 "name": "BaseBdev1", 00:29:22.131 "uuid": "4e229d3c-15ea-442d-a3eb-05d5371afa56", 00:29:22.131 "is_configured": true, 00:29:22.131 "data_offset": 2048, 00:29:22.131 "data_size": 63488 00:29:22.131 }, 00:29:22.131 { 00:29:22.131 "name": "BaseBdev2", 00:29:22.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.131 "is_configured": false, 00:29:22.131 "data_offset": 0, 00:29:22.131 "data_size": 0 00:29:22.131 }, 00:29:22.131 { 00:29:22.131 "name": "BaseBdev3", 00:29:22.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.131 "is_configured": false, 00:29:22.131 "data_offset": 0, 00:29:22.131 "data_size": 0 00:29:22.131 } 00:29:22.131 ] 00:29:22.131 }' 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:22.131 13:49:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:22.699 [2024-11-20 13:49:25.514470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:22.699 BaseBdev2 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:22.699 [ 00:29:22.699 { 00:29:22.699 "name": "BaseBdev2", 00:29:22.699 "aliases": [ 00:29:22.699 "8869b087-717c-4942-a624-7ce6676313af" 00:29:22.699 ], 00:29:22.699 "product_name": "Malloc disk", 00:29:22.699 "block_size": 512, 00:29:22.699 "num_blocks": 65536, 00:29:22.699 "uuid": "8869b087-717c-4942-a624-7ce6676313af", 00:29:22.699 "assigned_rate_limits": { 00:29:22.699 "rw_ios_per_sec": 0, 00:29:22.699 "rw_mbytes_per_sec": 0, 00:29:22.699 "r_mbytes_per_sec": 0, 00:29:22.699 "w_mbytes_per_sec": 0 00:29:22.699 }, 00:29:22.699 "claimed": true, 
00:29:22.699 "claim_type": "exclusive_write", 00:29:22.699 "zoned": false, 00:29:22.699 "supported_io_types": { 00:29:22.699 "read": true, 00:29:22.699 "write": true, 00:29:22.699 "unmap": true, 00:29:22.699 "flush": true, 00:29:22.699 "reset": true, 00:29:22.699 "nvme_admin": false, 00:29:22.699 "nvme_io": false, 00:29:22.699 "nvme_io_md": false, 00:29:22.699 "write_zeroes": true, 00:29:22.699 "zcopy": true, 00:29:22.699 "get_zone_info": false, 00:29:22.699 "zone_management": false, 00:29:22.699 "zone_append": false, 00:29:22.699 "compare": false, 00:29:22.699 "compare_and_write": false, 00:29:22.699 "abort": true, 00:29:22.699 "seek_hole": false, 00:29:22.699 "seek_data": false, 00:29:22.699 "copy": true, 00:29:22.699 "nvme_iov_md": false 00:29:22.699 }, 00:29:22.699 "memory_domains": [ 00:29:22.699 { 00:29:22.699 "dma_device_id": "system", 00:29:22.699 "dma_device_type": 1 00:29:22.699 }, 00:29:22.699 { 00:29:22.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:22.699 "dma_device_type": 2 00:29:22.699 } 00:29:22.699 ], 00:29:22.699 "driver_specific": {} 00:29:22.699 } 00:29:22.699 ] 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:22.699 13:49:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:22.699 "name": "Existed_Raid", 00:29:22.699 "uuid": "09114ea2-aba9-4152-879a-f66c1d5276a3", 00:29:22.699 "strip_size_kb": 64, 00:29:22.699 "state": "configuring", 00:29:22.699 "raid_level": "raid5f", 00:29:22.699 "superblock": true, 00:29:22.699 "num_base_bdevs": 3, 00:29:22.699 "num_base_bdevs_discovered": 2, 00:29:22.699 "num_base_bdevs_operational": 3, 00:29:22.699 "base_bdevs_list": [ 00:29:22.699 { 00:29:22.699 "name": "BaseBdev1", 00:29:22.699 "uuid": "4e229d3c-15ea-442d-a3eb-05d5371afa56", 
00:29:22.699 "is_configured": true, 00:29:22.699 "data_offset": 2048, 00:29:22.699 "data_size": 63488 00:29:22.699 }, 00:29:22.699 { 00:29:22.699 "name": "BaseBdev2", 00:29:22.699 "uuid": "8869b087-717c-4942-a624-7ce6676313af", 00:29:22.699 "is_configured": true, 00:29:22.699 "data_offset": 2048, 00:29:22.699 "data_size": 63488 00:29:22.699 }, 00:29:22.699 { 00:29:22.699 "name": "BaseBdev3", 00:29:22.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.699 "is_configured": false, 00:29:22.699 "data_offset": 0, 00:29:22.699 "data_size": 0 00:29:22.699 } 00:29:22.699 ] 00:29:22.699 }' 00:29:22.699 13:49:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:22.700 13:49:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:23.267 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:29:23.267 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.267 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:23.267 [2024-11-20 13:49:26.179487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:23.267 [2024-11-20 13:49:26.180240] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:23.267 [2024-11-20 13:49:26.180287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:23.267 BaseBdev3 00:29:23.267 [2024-11-20 13:49:26.180725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:23.267 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.267 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:29:23.267 13:49:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:23.526 [2024-11-20 13:49:26.187547] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:23.526 [2024-11-20 13:49:26.187585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:29:23.526 [2024-11-20 13:49:26.188078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:23.526 [ 00:29:23.526 { 00:29:23.526 "name": "BaseBdev3", 00:29:23.526 "aliases": [ 00:29:23.526 "6445bfb9-d0be-40cd-9699-3ec93cdc484c" 00:29:23.526 ], 00:29:23.526 "product_name": "Malloc disk", 00:29:23.526 "block_size": 512, 00:29:23.526 
"num_blocks": 65536, 00:29:23.526 "uuid": "6445bfb9-d0be-40cd-9699-3ec93cdc484c", 00:29:23.526 "assigned_rate_limits": { 00:29:23.526 "rw_ios_per_sec": 0, 00:29:23.526 "rw_mbytes_per_sec": 0, 00:29:23.526 "r_mbytes_per_sec": 0, 00:29:23.526 "w_mbytes_per_sec": 0 00:29:23.526 }, 00:29:23.526 "claimed": true, 00:29:23.526 "claim_type": "exclusive_write", 00:29:23.526 "zoned": false, 00:29:23.526 "supported_io_types": { 00:29:23.526 "read": true, 00:29:23.526 "write": true, 00:29:23.526 "unmap": true, 00:29:23.526 "flush": true, 00:29:23.526 "reset": true, 00:29:23.526 "nvme_admin": false, 00:29:23.526 "nvme_io": false, 00:29:23.526 "nvme_io_md": false, 00:29:23.526 "write_zeroes": true, 00:29:23.526 "zcopy": true, 00:29:23.526 "get_zone_info": false, 00:29:23.526 "zone_management": false, 00:29:23.526 "zone_append": false, 00:29:23.526 "compare": false, 00:29:23.526 "compare_and_write": false, 00:29:23.526 "abort": true, 00:29:23.526 "seek_hole": false, 00:29:23.526 "seek_data": false, 00:29:23.526 "copy": true, 00:29:23.526 "nvme_iov_md": false 00:29:23.526 }, 00:29:23.526 "memory_domains": [ 00:29:23.526 { 00:29:23.526 "dma_device_id": "system", 00:29:23.526 "dma_device_type": 1 00:29:23.526 }, 00:29:23.526 { 00:29:23.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:23.526 "dma_device_type": 2 00:29:23.526 } 00:29:23.526 ], 00:29:23.526 "driver_specific": {} 00:29:23.526 } 00:29:23.526 ] 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.526 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:23.526 "name": "Existed_Raid", 00:29:23.526 "uuid": "09114ea2-aba9-4152-879a-f66c1d5276a3", 00:29:23.526 "strip_size_kb": 64, 00:29:23.526 "state": "online", 00:29:23.526 "raid_level": "raid5f", 00:29:23.526 "superblock": true, 
00:29:23.526 "num_base_bdevs": 3, 00:29:23.526 "num_base_bdevs_discovered": 3, 00:29:23.526 "num_base_bdevs_operational": 3, 00:29:23.526 "base_bdevs_list": [ 00:29:23.526 { 00:29:23.526 "name": "BaseBdev1", 00:29:23.526 "uuid": "4e229d3c-15ea-442d-a3eb-05d5371afa56", 00:29:23.526 "is_configured": true, 00:29:23.526 "data_offset": 2048, 00:29:23.527 "data_size": 63488 00:29:23.527 }, 00:29:23.527 { 00:29:23.527 "name": "BaseBdev2", 00:29:23.527 "uuid": "8869b087-717c-4942-a624-7ce6676313af", 00:29:23.527 "is_configured": true, 00:29:23.527 "data_offset": 2048, 00:29:23.527 "data_size": 63488 00:29:23.527 }, 00:29:23.527 { 00:29:23.527 "name": "BaseBdev3", 00:29:23.527 "uuid": "6445bfb9-d0be-40cd-9699-3ec93cdc484c", 00:29:23.527 "is_configured": true, 00:29:23.527 "data_offset": 2048, 00:29:23.527 "data_size": 63488 00:29:23.527 } 00:29:23.527 ] 00:29:23.527 }' 00:29:23.527 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:23.527 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:24.115 [2024-11-20 13:49:26.788017] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:24.115 "name": "Existed_Raid", 00:29:24.115 "aliases": [ 00:29:24.115 "09114ea2-aba9-4152-879a-f66c1d5276a3" 00:29:24.115 ], 00:29:24.115 "product_name": "Raid Volume", 00:29:24.115 "block_size": 512, 00:29:24.115 "num_blocks": 126976, 00:29:24.115 "uuid": "09114ea2-aba9-4152-879a-f66c1d5276a3", 00:29:24.115 "assigned_rate_limits": { 00:29:24.115 "rw_ios_per_sec": 0, 00:29:24.115 "rw_mbytes_per_sec": 0, 00:29:24.115 "r_mbytes_per_sec": 0, 00:29:24.115 "w_mbytes_per_sec": 0 00:29:24.115 }, 00:29:24.115 "claimed": false, 00:29:24.115 "zoned": false, 00:29:24.115 "supported_io_types": { 00:29:24.115 "read": true, 00:29:24.115 "write": true, 00:29:24.115 "unmap": false, 00:29:24.115 "flush": false, 00:29:24.115 "reset": true, 00:29:24.115 "nvme_admin": false, 00:29:24.115 "nvme_io": false, 00:29:24.115 "nvme_io_md": false, 00:29:24.115 "write_zeroes": true, 00:29:24.115 "zcopy": false, 00:29:24.115 "get_zone_info": false, 00:29:24.115 "zone_management": false, 00:29:24.115 "zone_append": false, 00:29:24.115 "compare": false, 00:29:24.115 "compare_and_write": false, 00:29:24.115 "abort": false, 00:29:24.115 "seek_hole": false, 00:29:24.115 "seek_data": false, 00:29:24.115 "copy": false, 00:29:24.115 "nvme_iov_md": false 00:29:24.115 }, 00:29:24.115 "driver_specific": { 00:29:24.115 "raid": { 00:29:24.115 "uuid": "09114ea2-aba9-4152-879a-f66c1d5276a3", 00:29:24.115 
"strip_size_kb": 64, 00:29:24.115 "state": "online", 00:29:24.115 "raid_level": "raid5f", 00:29:24.115 "superblock": true, 00:29:24.115 "num_base_bdevs": 3, 00:29:24.115 "num_base_bdevs_discovered": 3, 00:29:24.115 "num_base_bdevs_operational": 3, 00:29:24.115 "base_bdevs_list": [ 00:29:24.115 { 00:29:24.115 "name": "BaseBdev1", 00:29:24.115 "uuid": "4e229d3c-15ea-442d-a3eb-05d5371afa56", 00:29:24.115 "is_configured": true, 00:29:24.115 "data_offset": 2048, 00:29:24.115 "data_size": 63488 00:29:24.115 }, 00:29:24.115 { 00:29:24.115 "name": "BaseBdev2", 00:29:24.115 "uuid": "8869b087-717c-4942-a624-7ce6676313af", 00:29:24.115 "is_configured": true, 00:29:24.115 "data_offset": 2048, 00:29:24.115 "data_size": 63488 00:29:24.115 }, 00:29:24.115 { 00:29:24.115 "name": "BaseBdev3", 00:29:24.115 "uuid": "6445bfb9-d0be-40cd-9699-3ec93cdc484c", 00:29:24.115 "is_configured": true, 00:29:24.115 "data_offset": 2048, 00:29:24.115 "data_size": 63488 00:29:24.115 } 00:29:24.115 ] 00:29:24.115 } 00:29:24.115 } 00:29:24.115 }' 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:29:24.115 BaseBdev2 00:29:24.115 BaseBdev3' 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:24.115 13:49:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.115 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:24.115 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:24.115 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:24.115 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:24.115 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.115 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:24.115 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev3 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:24.374 [2024-11-20 13:49:27.131881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:24.374 "name": "Existed_Raid", 00:29:24.374 "uuid": "09114ea2-aba9-4152-879a-f66c1d5276a3", 00:29:24.374 "strip_size_kb": 64, 00:29:24.374 "state": "online", 00:29:24.374 "raid_level": "raid5f", 00:29:24.374 "superblock": true, 00:29:24.374 "num_base_bdevs": 3, 00:29:24.374 "num_base_bdevs_discovered": 2, 00:29:24.374 "num_base_bdevs_operational": 2, 
00:29:24.374 "base_bdevs_list": [ 00:29:24.374 { 00:29:24.374 "name": null, 00:29:24.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:24.374 "is_configured": false, 00:29:24.374 "data_offset": 0, 00:29:24.374 "data_size": 63488 00:29:24.374 }, 00:29:24.374 { 00:29:24.374 "name": "BaseBdev2", 00:29:24.374 "uuid": "8869b087-717c-4942-a624-7ce6676313af", 00:29:24.374 "is_configured": true, 00:29:24.374 "data_offset": 2048, 00:29:24.374 "data_size": 63488 00:29:24.374 }, 00:29:24.374 { 00:29:24.374 "name": "BaseBdev3", 00:29:24.374 "uuid": "6445bfb9-d0be-40cd-9699-3ec93cdc484c", 00:29:24.374 "is_configured": true, 00:29:24.374 "data_offset": 2048, 00:29:24.374 "data_size": 63488 00:29:24.374 } 00:29:24.374 ] 00:29:24.374 }' 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:24.374 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:24.939 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:29:24.939 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:24.939 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:24.939 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.939 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:24.939 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:24.939 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.939 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:24.939 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:29:24.939 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:29:24.939 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.939 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:24.939 [2024-11-20 13:49:27.821955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:24.939 [2024-11-20 13:49:27.822191] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:25.198 [2024-11-20 13:49:27.913200] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:25.198 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.198 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:25.198 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:25.198 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:25.198 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.198 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.198 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:25.198 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.198 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:25.198 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:25.198 13:49:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:29:25.198 
13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.198 13:49:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.198 [2024-11-20 13:49:27.977285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:25.198 [2024-11-20 13:49:27.977385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:29:25.198 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.198 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:25.198 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:25.198 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:25.198 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.198 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:29:25.198 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.198 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.456 BaseBdev2 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.456 [ 00:29:25.456 { 
00:29:25.456 "name": "BaseBdev2", 00:29:25.456 "aliases": [ 00:29:25.456 "8c85bfe4-5237-47b0-91aa-5da5b1baf606" 00:29:25.456 ], 00:29:25.456 "product_name": "Malloc disk", 00:29:25.456 "block_size": 512, 00:29:25.456 "num_blocks": 65536, 00:29:25.456 "uuid": "8c85bfe4-5237-47b0-91aa-5da5b1baf606", 00:29:25.456 "assigned_rate_limits": { 00:29:25.456 "rw_ios_per_sec": 0, 00:29:25.456 "rw_mbytes_per_sec": 0, 00:29:25.456 "r_mbytes_per_sec": 0, 00:29:25.456 "w_mbytes_per_sec": 0 00:29:25.456 }, 00:29:25.456 "claimed": false, 00:29:25.456 "zoned": false, 00:29:25.456 "supported_io_types": { 00:29:25.456 "read": true, 00:29:25.456 "write": true, 00:29:25.456 "unmap": true, 00:29:25.456 "flush": true, 00:29:25.456 "reset": true, 00:29:25.456 "nvme_admin": false, 00:29:25.456 "nvme_io": false, 00:29:25.456 "nvme_io_md": false, 00:29:25.456 "write_zeroes": true, 00:29:25.456 "zcopy": true, 00:29:25.456 "get_zone_info": false, 00:29:25.456 "zone_management": false, 00:29:25.456 "zone_append": false, 00:29:25.456 "compare": false, 00:29:25.456 "compare_and_write": false, 00:29:25.456 "abort": true, 00:29:25.456 "seek_hole": false, 00:29:25.456 "seek_data": false, 00:29:25.456 "copy": true, 00:29:25.456 "nvme_iov_md": false 00:29:25.456 }, 00:29:25.456 "memory_domains": [ 00:29:25.456 { 00:29:25.456 "dma_device_id": "system", 00:29:25.456 "dma_device_type": 1 00:29:25.456 }, 00:29:25.456 { 00:29:25.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:25.456 "dma_device_type": 2 00:29:25.456 } 00:29:25.456 ], 00:29:25.456 "driver_specific": {} 00:29:25.456 } 00:29:25.456 ] 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:29:25.456 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.457 BaseBdev3 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.457 13:49:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.457 [ 00:29:25.457 { 00:29:25.457 "name": "BaseBdev3", 00:29:25.457 "aliases": [ 00:29:25.457 "b3028604-a31d-4b52-bdeb-38f44153cbd8" 00:29:25.457 ], 00:29:25.457 "product_name": "Malloc disk", 00:29:25.457 "block_size": 512, 00:29:25.457 "num_blocks": 65536, 00:29:25.457 "uuid": "b3028604-a31d-4b52-bdeb-38f44153cbd8", 00:29:25.457 "assigned_rate_limits": { 00:29:25.457 "rw_ios_per_sec": 0, 00:29:25.457 "rw_mbytes_per_sec": 0, 00:29:25.457 "r_mbytes_per_sec": 0, 00:29:25.457 "w_mbytes_per_sec": 0 00:29:25.457 }, 00:29:25.457 "claimed": false, 00:29:25.457 "zoned": false, 00:29:25.457 "supported_io_types": { 00:29:25.457 "read": true, 00:29:25.457 "write": true, 00:29:25.457 "unmap": true, 00:29:25.457 "flush": true, 00:29:25.457 "reset": true, 00:29:25.457 "nvme_admin": false, 00:29:25.457 "nvme_io": false, 00:29:25.457 "nvme_io_md": false, 00:29:25.457 "write_zeroes": true, 00:29:25.457 "zcopy": true, 00:29:25.457 "get_zone_info": false, 00:29:25.457 "zone_management": false, 00:29:25.457 "zone_append": false, 00:29:25.457 "compare": false, 00:29:25.457 "compare_and_write": false, 00:29:25.457 "abort": true, 00:29:25.457 "seek_hole": false, 00:29:25.457 "seek_data": false, 00:29:25.457 "copy": true, 00:29:25.457 "nvme_iov_md": false 00:29:25.457 }, 00:29:25.457 "memory_domains": [ 00:29:25.457 { 00:29:25.457 "dma_device_id": "system", 00:29:25.457 "dma_device_type": 1 00:29:25.457 }, 00:29:25.457 { 00:29:25.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:25.457 "dma_device_type": 2 00:29:25.457 } 00:29:25.457 ], 00:29:25.457 "driver_specific": {} 00:29:25.457 } 00:29:25.457 ] 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.457 [2024-11-20 13:49:28.283740] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:25.457 [2024-11-20 13:49:28.283810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:25.457 [2024-11-20 13:49:28.283846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:25.457 [2024-11-20 13:49:28.286539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:25.457 13:49:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:25.457 "name": "Existed_Raid", 00:29:25.457 "uuid": "f0a81fe7-ac47-4310-8001-ba5634776c96", 00:29:25.457 "strip_size_kb": 64, 00:29:25.457 "state": "configuring", 00:29:25.457 "raid_level": "raid5f", 00:29:25.457 "superblock": true, 00:29:25.457 "num_base_bdevs": 3, 00:29:25.457 "num_base_bdevs_discovered": 2, 00:29:25.457 "num_base_bdevs_operational": 3, 00:29:25.457 "base_bdevs_list": [ 00:29:25.457 { 00:29:25.457 "name": "BaseBdev1", 00:29:25.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:25.457 "is_configured": false, 00:29:25.457 "data_offset": 0, 00:29:25.457 "data_size": 0 00:29:25.457 }, 00:29:25.457 { 00:29:25.457 "name": "BaseBdev2", 00:29:25.457 "uuid": "8c85bfe4-5237-47b0-91aa-5da5b1baf606", 00:29:25.457 "is_configured": true, 00:29:25.457 "data_offset": 2048, 00:29:25.457 "data_size": 63488 00:29:25.457 }, 00:29:25.457 { 
00:29:25.457 "name": "BaseBdev3", 00:29:25.457 "uuid": "b3028604-a31d-4b52-bdeb-38f44153cbd8", 00:29:25.457 "is_configured": true, 00:29:25.457 "data_offset": 2048, 00:29:25.457 "data_size": 63488 00:29:25.457 } 00:29:25.457 ] 00:29:25.457 }' 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:25.457 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.023 [2024-11-20 13:49:28.839977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:26.023 "name": "Existed_Raid", 00:29:26.023 "uuid": "f0a81fe7-ac47-4310-8001-ba5634776c96", 00:29:26.023 "strip_size_kb": 64, 00:29:26.023 "state": "configuring", 00:29:26.023 "raid_level": "raid5f", 00:29:26.023 "superblock": true, 00:29:26.023 "num_base_bdevs": 3, 00:29:26.023 "num_base_bdevs_discovered": 1, 00:29:26.023 "num_base_bdevs_operational": 3, 00:29:26.023 "base_bdevs_list": [ 00:29:26.023 { 00:29:26.023 "name": "BaseBdev1", 00:29:26.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:26.023 "is_configured": false, 00:29:26.023 "data_offset": 0, 00:29:26.023 "data_size": 0 00:29:26.023 }, 00:29:26.023 { 00:29:26.023 "name": null, 00:29:26.023 "uuid": "8c85bfe4-5237-47b0-91aa-5da5b1baf606", 00:29:26.023 "is_configured": false, 00:29:26.023 "data_offset": 0, 00:29:26.023 "data_size": 63488 00:29:26.023 }, 00:29:26.023 { 00:29:26.023 "name": "BaseBdev3", 00:29:26.023 "uuid": "b3028604-a31d-4b52-bdeb-38f44153cbd8", 00:29:26.023 "is_configured": true, 00:29:26.023 "data_offset": 2048, 00:29:26.023 "data_size": 
63488 00:29:26.023 } 00:29:26.023 ] 00:29:26.023 }' 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:26.023 13:49:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.589 [2024-11-20 13:49:29.481254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:26.589 BaseBdev1 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:26.589 13:49:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.589 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.589 [ 00:29:26.589 { 00:29:26.589 "name": "BaseBdev1", 00:29:26.589 "aliases": [ 00:29:26.589 "6e2b2252-5837-4a56-8fb5-6e0701e00e38" 00:29:26.589 ], 00:29:26.589 "product_name": "Malloc disk", 00:29:26.589 "block_size": 512, 00:29:26.589 "num_blocks": 65536, 00:29:26.589 "uuid": "6e2b2252-5837-4a56-8fb5-6e0701e00e38", 00:29:26.589 "assigned_rate_limits": { 00:29:26.589 "rw_ios_per_sec": 0, 00:29:26.847 "rw_mbytes_per_sec": 0, 00:29:26.847 "r_mbytes_per_sec": 0, 00:29:26.847 "w_mbytes_per_sec": 0 00:29:26.847 }, 00:29:26.847 "claimed": true, 00:29:26.847 "claim_type": "exclusive_write", 00:29:26.847 "zoned": false, 00:29:26.847 "supported_io_types": { 00:29:26.847 "read": true, 00:29:26.847 "write": true, 00:29:26.847 "unmap": true, 00:29:26.847 "flush": true, 00:29:26.847 "reset": true, 00:29:26.847 "nvme_admin": false, 00:29:26.847 
"nvme_io": false, 00:29:26.847 "nvme_io_md": false, 00:29:26.847 "write_zeroes": true, 00:29:26.847 "zcopy": true, 00:29:26.847 "get_zone_info": false, 00:29:26.847 "zone_management": false, 00:29:26.847 "zone_append": false, 00:29:26.847 "compare": false, 00:29:26.847 "compare_and_write": false, 00:29:26.847 "abort": true, 00:29:26.847 "seek_hole": false, 00:29:26.847 "seek_data": false, 00:29:26.847 "copy": true, 00:29:26.847 "nvme_iov_md": false 00:29:26.847 }, 00:29:26.847 "memory_domains": [ 00:29:26.847 { 00:29:26.847 "dma_device_id": "system", 00:29:26.847 "dma_device_type": 1 00:29:26.847 }, 00:29:26.847 { 00:29:26.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:26.847 "dma_device_type": 2 00:29:26.847 } 00:29:26.847 ], 00:29:26.847 "driver_specific": {} 00:29:26.847 } 00:29:26.847 ] 00:29:26.847 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.847 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:26.847 13:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:26.847 13:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:26.847 13:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:26.847 13:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:26.847 13:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:26.847 13:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:26.847 13:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:26.847 13:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:29:26.847 13:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:26.847 13:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:26.847 13:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:26.847 13:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:26.847 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.847 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.847 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.847 13:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:26.847 "name": "Existed_Raid", 00:29:26.847 "uuid": "f0a81fe7-ac47-4310-8001-ba5634776c96", 00:29:26.847 "strip_size_kb": 64, 00:29:26.847 "state": "configuring", 00:29:26.847 "raid_level": "raid5f", 00:29:26.847 "superblock": true, 00:29:26.847 "num_base_bdevs": 3, 00:29:26.847 "num_base_bdevs_discovered": 2, 00:29:26.847 "num_base_bdevs_operational": 3, 00:29:26.847 "base_bdevs_list": [ 00:29:26.847 { 00:29:26.847 "name": "BaseBdev1", 00:29:26.847 "uuid": "6e2b2252-5837-4a56-8fb5-6e0701e00e38", 00:29:26.848 "is_configured": true, 00:29:26.848 "data_offset": 2048, 00:29:26.848 "data_size": 63488 00:29:26.848 }, 00:29:26.848 { 00:29:26.848 "name": null, 00:29:26.848 "uuid": "8c85bfe4-5237-47b0-91aa-5da5b1baf606", 00:29:26.848 "is_configured": false, 00:29:26.848 "data_offset": 0, 00:29:26.848 "data_size": 63488 00:29:26.848 }, 00:29:26.848 { 00:29:26.848 "name": "BaseBdev3", 00:29:26.848 "uuid": "b3028604-a31d-4b52-bdeb-38f44153cbd8", 00:29:26.848 "is_configured": true, 00:29:26.848 "data_offset": 2048, 00:29:26.848 "data_size": 
63488 00:29:26.848 } 00:29:26.848 ] 00:29:26.848 }' 00:29:26.848 13:49:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:26.848 13:49:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.413 [2024-11-20 13:49:30.093499] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:27.413 13:49:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.413 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:27.413 "name": "Existed_Raid", 00:29:27.413 "uuid": "f0a81fe7-ac47-4310-8001-ba5634776c96", 00:29:27.413 "strip_size_kb": 64, 00:29:27.413 "state": "configuring", 00:29:27.413 "raid_level": "raid5f", 00:29:27.413 "superblock": true, 00:29:27.413 "num_base_bdevs": 3, 00:29:27.413 "num_base_bdevs_discovered": 1, 00:29:27.413 "num_base_bdevs_operational": 3, 00:29:27.413 "base_bdevs_list": [ 00:29:27.413 { 00:29:27.413 "name": "BaseBdev1", 00:29:27.413 "uuid": "6e2b2252-5837-4a56-8fb5-6e0701e00e38", 
00:29:27.413 "is_configured": true, 00:29:27.414 "data_offset": 2048, 00:29:27.414 "data_size": 63488 00:29:27.414 }, 00:29:27.414 { 00:29:27.414 "name": null, 00:29:27.414 "uuid": "8c85bfe4-5237-47b0-91aa-5da5b1baf606", 00:29:27.414 "is_configured": false, 00:29:27.414 "data_offset": 0, 00:29:27.414 "data_size": 63488 00:29:27.414 }, 00:29:27.414 { 00:29:27.414 "name": null, 00:29:27.414 "uuid": "b3028604-a31d-4b52-bdeb-38f44153cbd8", 00:29:27.414 "is_configured": false, 00:29:27.414 "data_offset": 0, 00:29:27.414 "data_size": 63488 00:29:27.414 } 00:29:27.414 ] 00:29:27.414 }' 00:29:27.414 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:27.414 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.982 [2024-11-20 13:49:30.677710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:27.982 "name": "Existed_Raid", 00:29:27.982 "uuid": "f0a81fe7-ac47-4310-8001-ba5634776c96", 00:29:27.982 "strip_size_kb": 64, 00:29:27.982 "state": "configuring", 00:29:27.982 "raid_level": "raid5f", 00:29:27.982 "superblock": true, 00:29:27.982 "num_base_bdevs": 3, 00:29:27.982 "num_base_bdevs_discovered": 2, 00:29:27.982 "num_base_bdevs_operational": 3, 00:29:27.982 "base_bdevs_list": [ 00:29:27.982 { 00:29:27.982 "name": "BaseBdev1", 00:29:27.982 "uuid": "6e2b2252-5837-4a56-8fb5-6e0701e00e38", 00:29:27.982 "is_configured": true, 00:29:27.982 "data_offset": 2048, 00:29:27.982 "data_size": 63488 00:29:27.982 }, 00:29:27.982 { 00:29:27.982 "name": null, 00:29:27.982 "uuid": "8c85bfe4-5237-47b0-91aa-5da5b1baf606", 00:29:27.982 "is_configured": false, 00:29:27.982 "data_offset": 0, 00:29:27.982 "data_size": 63488 00:29:27.982 }, 00:29:27.982 { 00:29:27.982 "name": "BaseBdev3", 00:29:27.982 "uuid": "b3028604-a31d-4b52-bdeb-38f44153cbd8", 00:29:27.982 "is_configured": true, 00:29:27.982 "data_offset": 2048, 00:29:27.982 "data_size": 63488 00:29:27.982 } 00:29:27.982 ] 00:29:27.982 }' 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:27.982 13:49:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.550 13:49:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.550 [2024-11-20 13:49:31.265904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:28.550 "name": "Existed_Raid", 00:29:28.550 "uuid": "f0a81fe7-ac47-4310-8001-ba5634776c96", 00:29:28.550 "strip_size_kb": 64, 00:29:28.550 "state": "configuring", 00:29:28.550 "raid_level": "raid5f", 00:29:28.550 "superblock": true, 00:29:28.550 "num_base_bdevs": 3, 00:29:28.550 "num_base_bdevs_discovered": 1, 00:29:28.550 "num_base_bdevs_operational": 3, 00:29:28.550 "base_bdevs_list": [ 00:29:28.550 { 00:29:28.550 "name": null, 00:29:28.550 "uuid": "6e2b2252-5837-4a56-8fb5-6e0701e00e38", 00:29:28.550 "is_configured": false, 00:29:28.550 "data_offset": 0, 00:29:28.550 "data_size": 63488 00:29:28.550 }, 00:29:28.550 { 00:29:28.550 "name": null, 00:29:28.550 "uuid": "8c85bfe4-5237-47b0-91aa-5da5b1baf606", 00:29:28.550 "is_configured": false, 00:29:28.550 "data_offset": 0, 00:29:28.550 "data_size": 63488 00:29:28.550 }, 00:29:28.550 { 00:29:28.550 "name": "BaseBdev3", 00:29:28.550 "uuid": "b3028604-a31d-4b52-bdeb-38f44153cbd8", 00:29:28.550 "is_configured": true, 00:29:28.550 "data_offset": 2048, 00:29:28.550 "data_size": 63488 00:29:28.550 } 00:29:28.550 ] 00:29:28.550 }' 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:28.550 13:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.117 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.118 [2024-11-20 13:49:31.974620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:29.118 13:49:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.118 13:49:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.118 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.377 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:29.377 "name": "Existed_Raid", 00:29:29.377 "uuid": "f0a81fe7-ac47-4310-8001-ba5634776c96", 00:29:29.377 "strip_size_kb": 64, 00:29:29.377 "state": "configuring", 00:29:29.377 "raid_level": "raid5f", 00:29:29.377 "superblock": true, 00:29:29.377 "num_base_bdevs": 3, 00:29:29.377 "num_base_bdevs_discovered": 2, 00:29:29.377 "num_base_bdevs_operational": 3, 00:29:29.377 "base_bdevs_list": [ 00:29:29.377 { 00:29:29.377 "name": null, 00:29:29.377 "uuid": "6e2b2252-5837-4a56-8fb5-6e0701e00e38", 00:29:29.377 "is_configured": false, 00:29:29.377 "data_offset": 0, 00:29:29.377 "data_size": 63488 00:29:29.377 }, 00:29:29.377 { 00:29:29.377 "name": "BaseBdev2", 00:29:29.377 "uuid": "8c85bfe4-5237-47b0-91aa-5da5b1baf606", 00:29:29.377 "is_configured": true, 00:29:29.377 "data_offset": 2048, 00:29:29.377 "data_size": 63488 00:29:29.377 }, 00:29:29.377 { 
00:29:29.377 "name": "BaseBdev3", 00:29:29.377 "uuid": "b3028604-a31d-4b52-bdeb-38f44153cbd8", 00:29:29.377 "is_configured": true, 00:29:29.377 "data_offset": 2048, 00:29:29.377 "data_size": 63488 00:29:29.377 } 00:29:29.377 ] 00:29:29.377 }' 00:29:29.377 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:29.377 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.636 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:29:29.636 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:29.636 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.636 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.636 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.895 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:29:29.895 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:29.895 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:29:29.895 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.895 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.895 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.895 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6e2b2252-5837-4a56-8fb5-6e0701e00e38 00:29:29.895 13:49:32 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.895 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.895 [2024-11-20 13:49:32.671541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:29:29.895 [2024-11-20 13:49:32.671933] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:29:29.895 [2024-11-20 13:49:32.671958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:29.895 [2024-11-20 13:49:32.672283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:29:29.895 NewBaseBdev 00:29:29.895 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.895 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:29:29.895 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:29:29.895 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:29.895 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:29.895 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:29.895 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:29.895 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:29.895 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.896 [2024-11-20 13:49:32.677618] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:29:29.896 
[2024-11-20 13:49:32.677664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:29:29.896 [2024-11-20 13:49:32.678070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.896 [ 00:29:29.896 { 00:29:29.896 "name": "NewBaseBdev", 00:29:29.896 "aliases": [ 00:29:29.896 "6e2b2252-5837-4a56-8fb5-6e0701e00e38" 00:29:29.896 ], 00:29:29.896 "product_name": "Malloc disk", 00:29:29.896 "block_size": 512, 00:29:29.896 "num_blocks": 65536, 00:29:29.896 "uuid": "6e2b2252-5837-4a56-8fb5-6e0701e00e38", 00:29:29.896 "assigned_rate_limits": { 00:29:29.896 "rw_ios_per_sec": 0, 00:29:29.896 "rw_mbytes_per_sec": 0, 00:29:29.896 "r_mbytes_per_sec": 0, 00:29:29.896 "w_mbytes_per_sec": 0 00:29:29.896 }, 00:29:29.896 "claimed": true, 00:29:29.896 "claim_type": "exclusive_write", 00:29:29.896 "zoned": false, 00:29:29.896 "supported_io_types": { 00:29:29.896 "read": true, 00:29:29.896 "write": true, 00:29:29.896 "unmap": true, 00:29:29.896 "flush": true, 00:29:29.896 "reset": true, 00:29:29.896 "nvme_admin": false, 00:29:29.896 "nvme_io": false, 00:29:29.896 "nvme_io_md": false, 00:29:29.896 "write_zeroes": true, 00:29:29.896 "zcopy": true, 00:29:29.896 "get_zone_info": false, 00:29:29.896 "zone_management": false, 00:29:29.896 "zone_append": false, 00:29:29.896 "compare": false, 00:29:29.896 "compare_and_write": false, 00:29:29.896 "abort": true, 00:29:29.896 "seek_hole": false, 00:29:29.896 "seek_data": false, 
00:29:29.896 "copy": true, 00:29:29.896 "nvme_iov_md": false 00:29:29.896 }, 00:29:29.896 "memory_domains": [ 00:29:29.896 { 00:29:29.896 "dma_device_id": "system", 00:29:29.896 "dma_device_type": 1 00:29:29.896 }, 00:29:29.896 { 00:29:29.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:29.896 "dma_device_type": 2 00:29:29.896 } 00:29:29.896 ], 00:29:29.896 "driver_specific": {} 00:29:29.896 } 00:29:29.896 ] 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:29.896 13:49:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:29.896 "name": "Existed_Raid", 00:29:29.896 "uuid": "f0a81fe7-ac47-4310-8001-ba5634776c96", 00:29:29.896 "strip_size_kb": 64, 00:29:29.896 "state": "online", 00:29:29.896 "raid_level": "raid5f", 00:29:29.896 "superblock": true, 00:29:29.896 "num_base_bdevs": 3, 00:29:29.896 "num_base_bdevs_discovered": 3, 00:29:29.896 "num_base_bdevs_operational": 3, 00:29:29.896 "base_bdevs_list": [ 00:29:29.896 { 00:29:29.896 "name": "NewBaseBdev", 00:29:29.896 "uuid": "6e2b2252-5837-4a56-8fb5-6e0701e00e38", 00:29:29.896 "is_configured": true, 00:29:29.896 "data_offset": 2048, 00:29:29.896 "data_size": 63488 00:29:29.896 }, 00:29:29.896 { 00:29:29.896 "name": "BaseBdev2", 00:29:29.896 "uuid": "8c85bfe4-5237-47b0-91aa-5da5b1baf606", 00:29:29.896 "is_configured": true, 00:29:29.896 "data_offset": 2048, 00:29:29.896 "data_size": 63488 00:29:29.896 }, 00:29:29.896 { 00:29:29.896 "name": "BaseBdev3", 00:29:29.896 "uuid": "b3028604-a31d-4b52-bdeb-38f44153cbd8", 00:29:29.896 "is_configured": true, 00:29:29.896 "data_offset": 2048, 00:29:29.896 "data_size": 63488 00:29:29.896 } 00:29:29.896 ] 00:29:29.896 }' 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:29.896 13:49:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.463 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:29:30.463 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:30.464 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:30.464 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:30.464 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:29:30.464 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:30.464 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:30.464 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:30.464 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.464 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.464 [2024-11-20 13:49:33.253103] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:30.464 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.464 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:30.464 "name": "Existed_Raid", 00:29:30.464 "aliases": [ 00:29:30.464 "f0a81fe7-ac47-4310-8001-ba5634776c96" 00:29:30.464 ], 00:29:30.464 "product_name": "Raid Volume", 00:29:30.464 "block_size": 512, 00:29:30.464 "num_blocks": 126976, 00:29:30.464 "uuid": "f0a81fe7-ac47-4310-8001-ba5634776c96", 00:29:30.464 "assigned_rate_limits": { 00:29:30.464 "rw_ios_per_sec": 0, 00:29:30.464 "rw_mbytes_per_sec": 0, 00:29:30.464 "r_mbytes_per_sec": 0, 00:29:30.464 "w_mbytes_per_sec": 0 00:29:30.464 }, 00:29:30.464 "claimed": false, 00:29:30.464 "zoned": false, 00:29:30.464 
"supported_io_types": { 00:29:30.464 "read": true, 00:29:30.464 "write": true, 00:29:30.464 "unmap": false, 00:29:30.464 "flush": false, 00:29:30.464 "reset": true, 00:29:30.464 "nvme_admin": false, 00:29:30.464 "nvme_io": false, 00:29:30.464 "nvme_io_md": false, 00:29:30.464 "write_zeroes": true, 00:29:30.464 "zcopy": false, 00:29:30.464 "get_zone_info": false, 00:29:30.464 "zone_management": false, 00:29:30.464 "zone_append": false, 00:29:30.464 "compare": false, 00:29:30.464 "compare_and_write": false, 00:29:30.464 "abort": false, 00:29:30.464 "seek_hole": false, 00:29:30.464 "seek_data": false, 00:29:30.464 "copy": false, 00:29:30.464 "nvme_iov_md": false 00:29:30.464 }, 00:29:30.464 "driver_specific": { 00:29:30.464 "raid": { 00:29:30.464 "uuid": "f0a81fe7-ac47-4310-8001-ba5634776c96", 00:29:30.464 "strip_size_kb": 64, 00:29:30.464 "state": "online", 00:29:30.464 "raid_level": "raid5f", 00:29:30.464 "superblock": true, 00:29:30.464 "num_base_bdevs": 3, 00:29:30.464 "num_base_bdevs_discovered": 3, 00:29:30.464 "num_base_bdevs_operational": 3, 00:29:30.464 "base_bdevs_list": [ 00:29:30.464 { 00:29:30.464 "name": "NewBaseBdev", 00:29:30.464 "uuid": "6e2b2252-5837-4a56-8fb5-6e0701e00e38", 00:29:30.464 "is_configured": true, 00:29:30.464 "data_offset": 2048, 00:29:30.464 "data_size": 63488 00:29:30.464 }, 00:29:30.464 { 00:29:30.464 "name": "BaseBdev2", 00:29:30.464 "uuid": "8c85bfe4-5237-47b0-91aa-5da5b1baf606", 00:29:30.464 "is_configured": true, 00:29:30.464 "data_offset": 2048, 00:29:30.464 "data_size": 63488 00:29:30.464 }, 00:29:30.464 { 00:29:30.464 "name": "BaseBdev3", 00:29:30.464 "uuid": "b3028604-a31d-4b52-bdeb-38f44153cbd8", 00:29:30.464 "is_configured": true, 00:29:30.464 "data_offset": 2048, 00:29:30.464 "data_size": 63488 00:29:30.464 } 00:29:30.464 ] 00:29:30.464 } 00:29:30.464 } 00:29:30.464 }' 00:29:30.464 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:29:30.464 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:29:30.464 BaseBdev2 00:29:30.464 BaseBdev3' 00:29:30.464 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:30.722 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:30.722 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:30.722 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:29:30.722 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:30.722 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.722 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.722 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.722 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:30.722 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:30.722 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.723 [2024-11-20 13:49:33.576857] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:30.723 [2024-11-20 13:49:33.576948] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:29:30.723 [2024-11-20 13:49:33.577068] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:30.723 [2024-11-20 13:49:33.577507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:30.723 [2024-11-20 13:49:33.577543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81080 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81080 ']' 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 81080 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81080 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:30.723 killing process with pid 81080 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81080' 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 81080 00:29:30.723 [2024-11-20 13:49:33.621651] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:30.723 13:49:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 81080 00:29:31.290 [2024-11-20 13:49:33.936924] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:32.667 13:49:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:29:32.667 00:29:32.667 real 0m12.639s 00:29:32.667 user 0m20.701s 00:29:32.667 sys 0m1.825s 00:29:32.667 13:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:32.667 13:49:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.667 ************************************ 00:29:32.667 END TEST raid5f_state_function_test_sb 00:29:32.667 ************************************ 00:29:32.667 13:49:35 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:29:32.667 13:49:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:32.667 13:49:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:32.667 13:49:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:32.667 ************************************ 00:29:32.667 START TEST raid5f_superblock_test 00:29:32.667 ************************************ 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81720 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81720 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81720 ']' 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:32.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.667 13:49:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:32.667 [2024-11-20 13:49:35.376153] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:29:32.667 [2024-11-20 13:49:35.376330] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81720 ] 00:29:32.668 [2024-11-20 13:49:35.560901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.927 [2024-11-20 13:49:35.728638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.186 [2024-11-20 13:49:35.982259] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:33.186 [2024-11-20 13:49:35.982318] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:29:33.755 13:49:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.755 malloc1 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.755 [2024-11-20 13:49:36.476730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:33.755 [2024-11-20 13:49:36.476855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:33.755 [2024-11-20 13:49:36.476892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:33.755 [2024-11-20 13:49:36.476941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:33.755 [2024-11-20 13:49:36.480194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:33.755 [2024-11-20 13:49:36.480250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:33.755 pt1 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.755 malloc2 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.755 [2024-11-20 13:49:36.541598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:33.755 [2024-11-20 13:49:36.541680] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:33.755 [2024-11-20 13:49:36.541719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:33.755 [2024-11-20 13:49:36.541737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:33.755 [2024-11-20 13:49:36.544987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:33.755 [2024-11-20 13:49:36.545068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:33.755 pt2 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.755 malloc3 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.755 [2024-11-20 13:49:36.617777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:33.755 [2024-11-20 13:49:36.617862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:33.755 [2024-11-20 13:49:36.617919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:33.755 [2024-11-20 13:49:36.617943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:33.755 [2024-11-20 13:49:36.621023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:33.755 [2024-11-20 13:49:36.621069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:33.755 pt3 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.755 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.755 [2024-11-20 13:49:36.629963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:33.755 [2024-11-20 
13:49:36.632691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:33.755 [2024-11-20 13:49:36.632813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:33.755 [2024-11-20 13:49:36.633122] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:33.755 [2024-11-20 13:49:36.633165] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:33.755 [2024-11-20 13:49:36.633478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:33.755 [2024-11-20 13:49:36.639156] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:33.755 [2024-11-20 13:49:36.639185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:33.755 [2024-11-20 13:49:36.639450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:33.756 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.756 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:33.756 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:33.756 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:33.756 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:33.756 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:33.756 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:33.756 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:33.756 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:29:33.756 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:33.756 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:33.756 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:33.756 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.756 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:33.756 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:33.756 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.015 13:49:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:34.015 "name": "raid_bdev1", 00:29:34.015 "uuid": "c0778faa-4b68-493a-b760-c8ae47fbd080", 00:29:34.015 "strip_size_kb": 64, 00:29:34.015 "state": "online", 00:29:34.015 "raid_level": "raid5f", 00:29:34.015 "superblock": true, 00:29:34.015 "num_base_bdevs": 3, 00:29:34.015 "num_base_bdevs_discovered": 3, 00:29:34.015 "num_base_bdevs_operational": 3, 00:29:34.015 "base_bdevs_list": [ 00:29:34.015 { 00:29:34.015 "name": "pt1", 00:29:34.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:34.015 "is_configured": true, 00:29:34.015 "data_offset": 2048, 00:29:34.015 "data_size": 63488 00:29:34.015 }, 00:29:34.015 { 00:29:34.015 "name": "pt2", 00:29:34.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:34.015 "is_configured": true, 00:29:34.015 "data_offset": 2048, 00:29:34.015 "data_size": 63488 00:29:34.015 }, 00:29:34.015 { 00:29:34.015 "name": "pt3", 00:29:34.015 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:34.015 "is_configured": true, 00:29:34.015 "data_offset": 2048, 00:29:34.015 "data_size": 63488 00:29:34.015 } 00:29:34.015 ] 00:29:34.015 }' 00:29:34.015 13:49:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:34.015 13:49:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.273 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:29:34.273 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:34.273 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:34.273 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:34.273 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:34.273 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:34.273 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:34.273 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.273 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.273 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:34.273 [2024-11-20 13:49:37.182397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:34.532 "name": "raid_bdev1", 00:29:34.532 "aliases": [ 00:29:34.532 "c0778faa-4b68-493a-b760-c8ae47fbd080" 00:29:34.532 ], 00:29:34.532 "product_name": "Raid Volume", 00:29:34.532 "block_size": 512, 00:29:34.532 "num_blocks": 126976, 00:29:34.532 "uuid": "c0778faa-4b68-493a-b760-c8ae47fbd080", 00:29:34.532 "assigned_rate_limits": { 00:29:34.532 "rw_ios_per_sec": 0, 00:29:34.532 
"rw_mbytes_per_sec": 0, 00:29:34.532 "r_mbytes_per_sec": 0, 00:29:34.532 "w_mbytes_per_sec": 0 00:29:34.532 }, 00:29:34.532 "claimed": false, 00:29:34.532 "zoned": false, 00:29:34.532 "supported_io_types": { 00:29:34.532 "read": true, 00:29:34.532 "write": true, 00:29:34.532 "unmap": false, 00:29:34.532 "flush": false, 00:29:34.532 "reset": true, 00:29:34.532 "nvme_admin": false, 00:29:34.532 "nvme_io": false, 00:29:34.532 "nvme_io_md": false, 00:29:34.532 "write_zeroes": true, 00:29:34.532 "zcopy": false, 00:29:34.532 "get_zone_info": false, 00:29:34.532 "zone_management": false, 00:29:34.532 "zone_append": false, 00:29:34.532 "compare": false, 00:29:34.532 "compare_and_write": false, 00:29:34.532 "abort": false, 00:29:34.532 "seek_hole": false, 00:29:34.532 "seek_data": false, 00:29:34.532 "copy": false, 00:29:34.532 "nvme_iov_md": false 00:29:34.532 }, 00:29:34.532 "driver_specific": { 00:29:34.532 "raid": { 00:29:34.532 "uuid": "c0778faa-4b68-493a-b760-c8ae47fbd080", 00:29:34.532 "strip_size_kb": 64, 00:29:34.532 "state": "online", 00:29:34.532 "raid_level": "raid5f", 00:29:34.532 "superblock": true, 00:29:34.532 "num_base_bdevs": 3, 00:29:34.532 "num_base_bdevs_discovered": 3, 00:29:34.532 "num_base_bdevs_operational": 3, 00:29:34.532 "base_bdevs_list": [ 00:29:34.532 { 00:29:34.532 "name": "pt1", 00:29:34.532 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:34.532 "is_configured": true, 00:29:34.532 "data_offset": 2048, 00:29:34.532 "data_size": 63488 00:29:34.532 }, 00:29:34.532 { 00:29:34.532 "name": "pt2", 00:29:34.532 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:34.532 "is_configured": true, 00:29:34.532 "data_offset": 2048, 00:29:34.532 "data_size": 63488 00:29:34.532 }, 00:29:34.532 { 00:29:34.532 "name": "pt3", 00:29:34.532 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:34.532 "is_configured": true, 00:29:34.532 "data_offset": 2048, 00:29:34.532 "data_size": 63488 00:29:34.532 } 00:29:34.532 ] 00:29:34.532 } 00:29:34.532 } 
00:29:34.532 }' 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:29:34.532 pt2 00:29:34.532 pt3' 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.532 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:29:34.792 [2024-11-20 13:49:37.538460] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c0778faa-4b68-493a-b760-c8ae47fbd080 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c0778faa-4b68-493a-b760-c8ae47fbd080 ']' 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.792 [2024-11-20 13:49:37.590235] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:34.792 [2024-11-20 13:49:37.590284] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:34.792 [2024-11-20 13:49:37.590415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:34.792 [2024-11-20 13:49:37.590543] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:34.792 [2024-11-20 13:49:37.590572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.792 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.051 [2024-11-20 13:49:37.746372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:35.051 [2024-11-20 13:49:37.749428] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:35.051 [2024-11-20 13:49:37.749559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:29:35.051 [2024-11-20 13:49:37.749661] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:29:35.051 [2024-11-20 13:49:37.749784] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:29:35.051 [2024-11-20 13:49:37.749831] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:29:35.051 [2024-11-20 13:49:37.749879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:35.051 [2024-11-20 13:49:37.749901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:29:35.051 request: 00:29:35.051 { 00:29:35.051 "name": "raid_bdev1", 00:29:35.051 "raid_level": "raid5f", 00:29:35.051 "base_bdevs": [ 00:29:35.051 "malloc1", 00:29:35.051 "malloc2", 00:29:35.051 "malloc3" 00:29:35.051 ], 00:29:35.051 "strip_size_kb": 64, 00:29:35.051 "superblock": false, 00:29:35.051 "method": "bdev_raid_create", 00:29:35.051 "req_id": 1 00:29:35.051 } 00:29:35.051 Got JSON-RPC error response 00:29:35.051 response: 00:29:35.051 { 00:29:35.051 "code": -17, 00:29:35.051 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:29:35.051 } 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:35.051 13:49:37 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.051 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.051 [2024-11-20 13:49:37.814602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:35.051 [2024-11-20 13:49:37.814726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.051 [2024-11-20 13:49:37.814762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:29:35.052 [2024-11-20 13:49:37.814780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.052 [2024-11-20 13:49:37.818479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.052 [2024-11-20 13:49:37.818565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:35.052 [2024-11-20 13:49:37.818688] bdev_raid.c:3907:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev pt1 00:29:35.052 [2024-11-20 13:49:37.818781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:35.052 pt1 00:29:35.052 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.052 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:29:35.052 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:35.052 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:35.052 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:35.052 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:35.052 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:35.052 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:35.052 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:35.052 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:35.052 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:35.052 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.052 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.052 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.052 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.052 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.052 
13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:35.052 "name": "raid_bdev1", 00:29:35.052 "uuid": "c0778faa-4b68-493a-b760-c8ae47fbd080", 00:29:35.052 "strip_size_kb": 64, 00:29:35.052 "state": "configuring", 00:29:35.052 "raid_level": "raid5f", 00:29:35.052 "superblock": true, 00:29:35.052 "num_base_bdevs": 3, 00:29:35.052 "num_base_bdevs_discovered": 1, 00:29:35.052 "num_base_bdevs_operational": 3, 00:29:35.052 "base_bdevs_list": [ 00:29:35.052 { 00:29:35.052 "name": "pt1", 00:29:35.052 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:35.052 "is_configured": true, 00:29:35.052 "data_offset": 2048, 00:29:35.052 "data_size": 63488 00:29:35.052 }, 00:29:35.052 { 00:29:35.052 "name": null, 00:29:35.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:35.052 "is_configured": false, 00:29:35.052 "data_offset": 2048, 00:29:35.052 "data_size": 63488 00:29:35.052 }, 00:29:35.052 { 00:29:35.052 "name": null, 00:29:35.052 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:35.052 "is_configured": false, 00:29:35.052 "data_offset": 2048, 00:29:35.052 "data_size": 63488 00:29:35.052 } 00:29:35.052 ] 00:29:35.052 }' 00:29:35.052 13:49:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:35.052 13:49:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.619 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:29:35.619 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:35.619 13:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.620 [2024-11-20 13:49:38.367171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:35.620 
[2024-11-20 13:49:38.367263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.620 [2024-11-20 13:49:38.367316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:29:35.620 [2024-11-20 13:49:38.367335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.620 [2024-11-20 13:49:38.368092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.620 [2024-11-20 13:49:38.368147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:35.620 [2024-11-20 13:49:38.368277] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:35.620 [2024-11-20 13:49:38.368324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:35.620 pt2 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.620 [2024-11-20 13:49:38.375079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:35.620 
13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:35.620 "name": "raid_bdev1", 00:29:35.620 "uuid": "c0778faa-4b68-493a-b760-c8ae47fbd080", 00:29:35.620 "strip_size_kb": 64, 00:29:35.620 "state": "configuring", 00:29:35.620 "raid_level": "raid5f", 00:29:35.620 "superblock": true, 00:29:35.620 "num_base_bdevs": 3, 00:29:35.620 "num_base_bdevs_discovered": 1, 00:29:35.620 "num_base_bdevs_operational": 3, 00:29:35.620 "base_bdevs_list": [ 00:29:35.620 { 00:29:35.620 "name": "pt1", 00:29:35.620 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:35.620 "is_configured": true, 00:29:35.620 "data_offset": 2048, 00:29:35.620 "data_size": 63488 00:29:35.620 }, 00:29:35.620 { 00:29:35.620 "name": null, 00:29:35.620 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:29:35.620 "is_configured": false, 00:29:35.620 "data_offset": 0, 00:29:35.620 "data_size": 63488 00:29:35.620 }, 00:29:35.620 { 00:29:35.620 "name": null, 00:29:35.620 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:35.620 "is_configured": false, 00:29:35.620 "data_offset": 2048, 00:29:35.620 "data_size": 63488 00:29:35.620 } 00:29:35.620 ] 00:29:35.620 }' 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:35.620 13:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.187 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:29:36.187 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.188 [2024-11-20 13:49:38.923315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:36.188 [2024-11-20 13:49:38.923496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:36.188 [2024-11-20 13:49:38.923543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:29:36.188 [2024-11-20 13:49:38.923563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:36.188 [2024-11-20 13:49:38.924614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:36.188 [2024-11-20 13:49:38.924672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:36.188 [2024-11-20 13:49:38.924799] bdev_raid.c:3907:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev pt2 00:29:36.188 [2024-11-20 13:49:38.924840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:36.188 pt2 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.188 [2024-11-20 13:49:38.935247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:36.188 [2024-11-20 13:49:38.935308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:36.188 [2024-11-20 13:49:38.935333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:29:36.188 [2024-11-20 13:49:38.935352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:36.188 [2024-11-20 13:49:38.935875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:36.188 [2024-11-20 13:49:38.935936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:36.188 [2024-11-20 13:49:38.936017] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:36.188 [2024-11-20 13:49:38.936051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:36.188 [2024-11-20 13:49:38.936222] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:36.188 [2024-11-20 
13:49:38.936247] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:36.188 [2024-11-20 13:49:38.936590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:36.188 [2024-11-20 13:49:38.942108] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:36.188 pt3 00:29:36.188 [2024-11-20 13:49:38.942286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:29:36.188 [2024-11-20 13:49:38.942534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:36.188 
13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:36.188 13:49:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.188 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:36.188 "name": "raid_bdev1", 00:29:36.188 "uuid": "c0778faa-4b68-493a-b760-c8ae47fbd080", 00:29:36.188 "strip_size_kb": 64, 00:29:36.188 "state": "online", 00:29:36.188 "raid_level": "raid5f", 00:29:36.188 "superblock": true, 00:29:36.188 "num_base_bdevs": 3, 00:29:36.188 "num_base_bdevs_discovered": 3, 00:29:36.188 "num_base_bdevs_operational": 3, 00:29:36.188 "base_bdevs_list": [ 00:29:36.188 { 00:29:36.188 "name": "pt1", 00:29:36.188 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:36.188 "is_configured": true, 00:29:36.188 "data_offset": 2048, 00:29:36.188 "data_size": 63488 00:29:36.188 }, 00:29:36.188 { 00:29:36.188 "name": "pt2", 00:29:36.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:36.188 "is_configured": true, 00:29:36.188 "data_offset": 2048, 00:29:36.188 "data_size": 63488 00:29:36.188 }, 00:29:36.188 { 00:29:36.188 "name": "pt3", 00:29:36.188 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:36.188 "is_configured": true, 00:29:36.188 "data_offset": 2048, 00:29:36.188 "data_size": 63488 00:29:36.188 } 00:29:36.188 ] 00:29:36.188 }' 00:29:36.188 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:36.188 13:49:39 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:29:36.770 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:29:36.770 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:36.770 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:36.770 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:36.770 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:36.770 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:36.770 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:36.770 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.770 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.770 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:36.770 [2024-11-20 13:49:39.497745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:36.770 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.770 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:36.770 "name": "raid_bdev1", 00:29:36.770 "aliases": [ 00:29:36.770 "c0778faa-4b68-493a-b760-c8ae47fbd080" 00:29:36.770 ], 00:29:36.770 "product_name": "Raid Volume", 00:29:36.770 "block_size": 512, 00:29:36.770 "num_blocks": 126976, 00:29:36.770 "uuid": "c0778faa-4b68-493a-b760-c8ae47fbd080", 00:29:36.770 "assigned_rate_limits": { 00:29:36.770 "rw_ios_per_sec": 0, 00:29:36.770 "rw_mbytes_per_sec": 0, 00:29:36.770 "r_mbytes_per_sec": 0, 00:29:36.770 "w_mbytes_per_sec": 0 00:29:36.770 }, 00:29:36.770 "claimed": false, 
00:29:36.770 "zoned": false, 00:29:36.770 "supported_io_types": { 00:29:36.770 "read": true, 00:29:36.770 "write": true, 00:29:36.770 "unmap": false, 00:29:36.770 "flush": false, 00:29:36.770 "reset": true, 00:29:36.770 "nvme_admin": false, 00:29:36.770 "nvme_io": false, 00:29:36.770 "nvme_io_md": false, 00:29:36.770 "write_zeroes": true, 00:29:36.770 "zcopy": false, 00:29:36.770 "get_zone_info": false, 00:29:36.770 "zone_management": false, 00:29:36.770 "zone_append": false, 00:29:36.770 "compare": false, 00:29:36.770 "compare_and_write": false, 00:29:36.770 "abort": false, 00:29:36.770 "seek_hole": false, 00:29:36.770 "seek_data": false, 00:29:36.770 "copy": false, 00:29:36.770 "nvme_iov_md": false 00:29:36.770 }, 00:29:36.770 "driver_specific": { 00:29:36.770 "raid": { 00:29:36.770 "uuid": "c0778faa-4b68-493a-b760-c8ae47fbd080", 00:29:36.770 "strip_size_kb": 64, 00:29:36.770 "state": "online", 00:29:36.770 "raid_level": "raid5f", 00:29:36.770 "superblock": true, 00:29:36.770 "num_base_bdevs": 3, 00:29:36.770 "num_base_bdevs_discovered": 3, 00:29:36.770 "num_base_bdevs_operational": 3, 00:29:36.770 "base_bdevs_list": [ 00:29:36.770 { 00:29:36.770 "name": "pt1", 00:29:36.770 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:36.770 "is_configured": true, 00:29:36.770 "data_offset": 2048, 00:29:36.770 "data_size": 63488 00:29:36.770 }, 00:29:36.770 { 00:29:36.770 "name": "pt2", 00:29:36.770 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:36.770 "is_configured": true, 00:29:36.770 "data_offset": 2048, 00:29:36.770 "data_size": 63488 00:29:36.770 }, 00:29:36.770 { 00:29:36.770 "name": "pt3", 00:29:36.770 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:36.770 "is_configured": true, 00:29:36.770 "data_offset": 2048, 00:29:36.770 "data_size": 63488 00:29:36.770 } 00:29:36.770 ] 00:29:36.770 } 00:29:36.770 } 00:29:36.770 }' 00:29:36.770 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:29:36.770 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:29:36.770 pt2 00:29:36.770 pt3' 00:29:36.770 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:36.770 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:36.770 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:37.029 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:29:37.029 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:37.029 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.029 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.029 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.029 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:37.029 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:37.029 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:37.029 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:37.029 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:29:37.029 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.029 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.029 13:49:39 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.029 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:37.029 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:37.029 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:37.029 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:29:37.029 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:37.029 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:29:37.030 [2024-11-20 13:49:39.865754] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
c0778faa-4b68-493a-b760-c8ae47fbd080 '!=' c0778faa-4b68-493a-b760-c8ae47fbd080 ']' 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.030 [2024-11-20 13:49:39.917602] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.030 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.288 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.288 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:37.288 "name": "raid_bdev1", 00:29:37.288 "uuid": "c0778faa-4b68-493a-b760-c8ae47fbd080", 00:29:37.288 "strip_size_kb": 64, 00:29:37.288 "state": "online", 00:29:37.288 "raid_level": "raid5f", 00:29:37.288 "superblock": true, 00:29:37.288 "num_base_bdevs": 3, 00:29:37.288 "num_base_bdevs_discovered": 2, 00:29:37.288 "num_base_bdevs_operational": 2, 00:29:37.288 "base_bdevs_list": [ 00:29:37.288 { 00:29:37.288 "name": null, 00:29:37.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:37.288 "is_configured": false, 00:29:37.288 "data_offset": 0, 00:29:37.288 "data_size": 63488 00:29:37.288 }, 00:29:37.288 { 00:29:37.288 "name": "pt2", 00:29:37.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:37.288 "is_configured": true, 00:29:37.288 "data_offset": 2048, 00:29:37.288 "data_size": 63488 00:29:37.288 }, 00:29:37.288 { 00:29:37.288 "name": "pt3", 00:29:37.288 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:37.288 "is_configured": true, 00:29:37.288 "data_offset": 2048, 00:29:37.288 "data_size": 63488 00:29:37.288 } 00:29:37.288 ] 00:29:37.288 }' 00:29:37.288 13:49:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:37.288 13:49:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.547 
13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:37.547 13:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.547 13:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.547 [2024-11-20 13:49:40.453779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:37.547 [2024-11-20 13:49:40.453829] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:37.547 [2024-11-20 13:49:40.453990] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:37.547 [2024-11-20 13:49:40.454085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:37.547 [2024-11-20 13:49:40.454122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:29:37.547 13:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.806 [2024-11-20 13:49:40.541740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:29:37.806 [2024-11-20 13:49:40.541830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:37.806 [2024-11-20 13:49:40.541859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:29:37.806 [2024-11-20 13:49:40.541878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:37.806 [2024-11-20 13:49:40.545563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:37.806 [2024-11-20 13:49:40.545627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:37.806 [2024-11-20 13:49:40.545751] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:37.806 [2024-11-20 13:49:40.545822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:37.806 pt2 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:37.806 "name": "raid_bdev1", 00:29:37.806 "uuid": "c0778faa-4b68-493a-b760-c8ae47fbd080", 00:29:37.806 "strip_size_kb": 64, 00:29:37.806 "state": "configuring", 00:29:37.806 "raid_level": "raid5f", 00:29:37.806 "superblock": true, 00:29:37.806 "num_base_bdevs": 3, 00:29:37.806 "num_base_bdevs_discovered": 1, 00:29:37.806 "num_base_bdevs_operational": 2, 00:29:37.806 "base_bdevs_list": [ 00:29:37.806 { 00:29:37.806 "name": null, 00:29:37.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:37.806 "is_configured": false, 00:29:37.806 "data_offset": 2048, 00:29:37.806 "data_size": 63488 00:29:37.806 }, 00:29:37.806 { 00:29:37.806 "name": "pt2", 00:29:37.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:37.806 "is_configured": true, 00:29:37.806 "data_offset": 2048, 00:29:37.806 "data_size": 63488 00:29:37.806 }, 00:29:37.806 { 00:29:37.806 "name": null, 00:29:37.806 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:37.806 "is_configured": false, 00:29:37.806 "data_offset": 2048, 00:29:37.806 "data_size": 63488 00:29:37.806 } 00:29:37.806 ] 00:29:37.806 }' 00:29:37.806 13:49:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:37.806 13:49:40 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:38.374 [2024-11-20 13:49:41.074132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:38.374 [2024-11-20 13:49:41.074243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:38.374 [2024-11-20 13:49:41.074284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:29:38.374 [2024-11-20 13:49:41.074306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:38.374 [2024-11-20 13:49:41.075039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:38.374 [2024-11-20 13:49:41.075078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:38.374 [2024-11-20 13:49:41.075228] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:38.374 [2024-11-20 13:49:41.075275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:38.374 [2024-11-20 13:49:41.075439] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:29:38.374 [2024-11-20 13:49:41.075461] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:38.374 [2024-11-20 
13:49:41.075864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:38.374 [2024-11-20 13:49:41.081132] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:29:38.374 [2024-11-20 13:49:41.081160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:29:38.374 [2024-11-20 13:49:41.081531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:38.374 pt3 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.374 13:49:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.374 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:38.374 "name": "raid_bdev1", 00:29:38.374 "uuid": "c0778faa-4b68-493a-b760-c8ae47fbd080", 00:29:38.374 "strip_size_kb": 64, 00:29:38.374 "state": "online", 00:29:38.374 "raid_level": "raid5f", 00:29:38.374 "superblock": true, 00:29:38.374 "num_base_bdevs": 3, 00:29:38.374 "num_base_bdevs_discovered": 2, 00:29:38.374 "num_base_bdevs_operational": 2, 00:29:38.374 "base_bdevs_list": [ 00:29:38.374 { 00:29:38.374 "name": null, 00:29:38.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:38.375 "is_configured": false, 00:29:38.375 "data_offset": 2048, 00:29:38.375 "data_size": 63488 00:29:38.375 }, 00:29:38.375 { 00:29:38.375 "name": "pt2", 00:29:38.375 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:38.375 "is_configured": true, 00:29:38.375 "data_offset": 2048, 00:29:38.375 "data_size": 63488 00:29:38.375 }, 00:29:38.375 { 00:29:38.375 "name": "pt3", 00:29:38.375 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:38.375 "is_configured": true, 00:29:38.375 "data_offset": 2048, 00:29:38.375 "data_size": 63488 00:29:38.375 } 00:29:38.375 ] 00:29:38.375 }' 00:29:38.375 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:38.375 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:38.943 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:38.943 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.943 13:49:41 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:29:38.943 [2024-11-20 13:49:41.607972] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:38.943 [2024-11-20 13:49:41.608021] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:38.943 [2024-11-20 13:49:41.608146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:38.943 [2024-11-20 13:49:41.608246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:38.943 [2024-11-20 13:49:41.608264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:29:38.943 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.943 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:38.943 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:29:38.943 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.943 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:38.943 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.943 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:29:38.943 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:29:38.943 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:29:38.943 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:29:38.943 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:29:38.943 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.943 13:49:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:38.943 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.943 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:38.943 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.943 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:38.943 [2024-11-20 13:49:41.679973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:38.943 [2024-11-20 13:49:41.680049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:38.943 [2024-11-20 13:49:41.680084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:29:38.943 [2024-11-20 13:49:41.680101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:38.943 [2024-11-20 13:49:41.683700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:38.943 [2024-11-20 13:49:41.683867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:38.943 [2024-11-20 13:49:41.684120] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:38.943 [2024-11-20 13:49:41.684345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:38.943 [2024-11-20 13:49:41.684780] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:29:38.943 [2024-11-20 13:49:41.685006] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:38.943 pt1 00:29:38.943 [2024-11-20 13:49:41.685168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 
00:29:38.944 [2024-11-20 13:49:41.685368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:38.944 "name": "raid_bdev1", 00:29:38.944 "uuid": "c0778faa-4b68-493a-b760-c8ae47fbd080", 00:29:38.944 "strip_size_kb": 64, 00:29:38.944 "state": "configuring", 00:29:38.944 "raid_level": "raid5f", 00:29:38.944 "superblock": true, 00:29:38.944 "num_base_bdevs": 3, 00:29:38.944 "num_base_bdevs_discovered": 1, 00:29:38.944 "num_base_bdevs_operational": 2, 00:29:38.944 "base_bdevs_list": [ 00:29:38.944 { 00:29:38.944 "name": null, 00:29:38.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:38.944 "is_configured": false, 00:29:38.944 "data_offset": 2048, 00:29:38.944 "data_size": 63488 00:29:38.944 }, 00:29:38.944 { 00:29:38.944 "name": "pt2", 00:29:38.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:38.944 "is_configured": true, 00:29:38.944 "data_offset": 2048, 00:29:38.944 "data_size": 63488 00:29:38.944 }, 00:29:38.944 { 00:29:38.944 "name": null, 00:29:38.944 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:38.944 "is_configured": false, 00:29:38.944 "data_offset": 2048, 00:29:38.944 "data_size": 63488 00:29:38.944 } 00:29:38.944 ] 00:29:38.944 }' 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:38.944 13:49:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.512 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:39.512 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:29:39.512 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.512 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.512 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:29:39.512 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:29:39.512 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:39.512 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.512 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.512 [2024-11-20 13:49:42.268389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:39.512 [2024-11-20 13:49:42.268661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:39.512 [2024-11-20 13:49:42.268711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:29:39.512 [2024-11-20 13:49:42.268731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:39.512 [2024-11-20 13:49:42.269482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:39.512 [2024-11-20 13:49:42.269526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:39.512 [2024-11-20 13:49:42.269659] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:39.512 [2024-11-20 13:49:42.269706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:39.512 [2024-11-20 13:49:42.269880] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:29:39.512 [2024-11-20 13:49:42.269919] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:39.512 [2024-11-20 13:49:42.270266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:29:39.512 [2024-11-20 13:49:42.275644] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:29:39.512 [2024-11-20 
13:49:42.275692] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:29:39.512 pt3 00:29:39.512 [2024-11-20 13:49:42.276040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:39.512 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.512 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:29:39.512 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:39.512 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:39.512 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:39.512 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:39.512 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:39.512 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:39.512 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:39.512 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:39.513 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:39.513 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:39.513 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.513 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.513 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.513 13:49:42 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.513 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:39.513 "name": "raid_bdev1", 00:29:39.513 "uuid": "c0778faa-4b68-493a-b760-c8ae47fbd080", 00:29:39.513 "strip_size_kb": 64, 00:29:39.513 "state": "online", 00:29:39.513 "raid_level": "raid5f", 00:29:39.513 "superblock": true, 00:29:39.513 "num_base_bdevs": 3, 00:29:39.513 "num_base_bdevs_discovered": 2, 00:29:39.513 "num_base_bdevs_operational": 2, 00:29:39.513 "base_bdevs_list": [ 00:29:39.513 { 00:29:39.513 "name": null, 00:29:39.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:39.513 "is_configured": false, 00:29:39.513 "data_offset": 2048, 00:29:39.513 "data_size": 63488 00:29:39.513 }, 00:29:39.513 { 00:29:39.513 "name": "pt2", 00:29:39.513 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:39.513 "is_configured": true, 00:29:39.513 "data_offset": 2048, 00:29:39.513 "data_size": 63488 00:29:39.513 }, 00:29:39.513 { 00:29:39.513 "name": "pt3", 00:29:39.513 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:39.513 "is_configured": true, 00:29:39.513 "data_offset": 2048, 00:29:39.513 "data_size": 63488 00:29:39.513 } 00:29:39.513 ] 00:29:39.513 }' 00:29:39.513 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:39.513 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.080 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:29:40.080 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.080 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.080 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:40.080 13:49:42 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.080 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:29:40.081 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:29:40.081 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:40.081 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.081 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.081 [2024-11-20 13:49:42.886794] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:40.081 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.081 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c0778faa-4b68-493a-b760-c8ae47fbd080 '!=' c0778faa-4b68-493a-b760-c8ae47fbd080 ']' 00:29:40.081 13:49:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81720 00:29:40.081 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81720 ']' 00:29:40.081 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81720 00:29:40.081 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:29:40.081 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:40.081 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81720 00:29:40.081 killing process with pid 81720 00:29:40.081 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:40.081 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:40.081 13:49:42 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 81720' 00:29:40.081 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81720 00:29:40.081 [2024-11-20 13:49:42.967838] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:40.081 [2024-11-20 13:49:42.968012] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:40.081 [2024-11-20 13:49:42.968125] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:40.081 [2024-11-20 13:49:42.968158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:29:40.081 13:49:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81720 00:29:40.648 [2024-11-20 13:49:43.274849] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:42.025 13:49:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:29:42.025 00:29:42.025 real 0m9.260s 00:29:42.025 user 0m14.864s 00:29:42.025 sys 0m1.466s 00:29:42.025 13:49:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:42.025 ************************************ 00:29:42.025 END TEST raid5f_superblock_test 00:29:42.025 ************************************ 00:29:42.025 13:49:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.025 13:49:44 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:29:42.025 13:49:44 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:29:42.025 13:49:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:29:42.026 13:49:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:42.026 13:49:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:42.026 ************************************ 00:29:42.026 START TEST 
raid5f_rebuild_test 00:29:42.026 ************************************ 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:29:42.026 13:49:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:29:42.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82180 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82180 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82180 ']' 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.026 13:49:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.026 [2024-11-20 13:49:44.711411] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:29:42.026 [2024-11-20 13:49:44.711920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82180 ] 00:29:42.026 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:42.026 Zero copy mechanism will not be used. 00:29:42.026 [2024-11-20 13:49:44.907603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.285 [2024-11-20 13:49:45.081219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.543 [2024-11-20 13:49:45.329843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:42.543 [2024-11-20 13:49:45.329959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.111 
BaseBdev1_malloc 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.111 [2024-11-20 13:49:45.858871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:43.111 [2024-11-20 13:49:45.859025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:43.111 [2024-11-20 13:49:45.859062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:43.111 [2024-11-20 13:49:45.859081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:43.111 [2024-11-20 13:49:45.862328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:43.111 [2024-11-20 13:49:45.862395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:43.111 BaseBdev1 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.111 BaseBdev2_malloc 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # 
rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.111 [2024-11-20 13:49:45.918034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:43.111 [2024-11-20 13:49:45.918113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:43.111 [2024-11-20 13:49:45.918165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:43.111 [2024-11-20 13:49:45.918185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:43.111 [2024-11-20 13:49:45.921474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:43.111 [2024-11-20 13:49:45.921524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:43.111 BaseBdev2 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.111 BaseBdev3_malloc 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.111 13:49:45 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.111 [2024-11-20 13:49:45.993428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:43.111 [2024-11-20 13:49:45.993507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:43.111 [2024-11-20 13:49:45.993558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:43.111 [2024-11-20 13:49:45.993578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:43.111 [2024-11-20 13:49:45.996686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:43.111 [2024-11-20 13:49:45.996906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:43.111 BaseBdev3 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.111 13:49:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.370 spare_malloc 00:29:43.370 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.370 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:43.370 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.370 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.370 spare_delay 00:29:43.370 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.370 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:29:43.370 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.370 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.370 [2024-11-20 13:49:46.068281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:43.370 [2024-11-20 13:49:46.068548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:43.370 [2024-11-20 13:49:46.068596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:29:43.370 [2024-11-20 13:49:46.068618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:43.370 [2024-11-20 13:49:46.072036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:43.370 [2024-11-20 13:49:46.072091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:43.370 spare 00:29:43.370 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.370 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:29:43.370 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.370 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.370 [2024-11-20 13:49:46.080391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:43.370 [2024-11-20 13:49:46.083158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:43.370 [2024-11-20 13:49:46.083258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:43.370 [2024-11-20 13:49:46.083404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:43.370 
[2024-11-20 13:49:46.083423] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:29:43.370 [2024-11-20 13:49:46.083903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:43.370 [2024-11-20 13:49:46.089401] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:43.370 [2024-11-20 13:49:46.089435] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:43.370 [2024-11-20 13:49:46.089712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:43.370 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.370 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:29:43.370 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:43.370 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:43.370 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:43.370 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:43.371 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:43.371 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:43.371 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:43.371 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:43.371 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:43.371 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:43.371 13:49:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:43.371 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.371 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.371 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.371 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:43.371 "name": "raid_bdev1", 00:29:43.371 "uuid": "8fa316ae-62fe-4918-8d9f-9413bf642f43", 00:29:43.371 "strip_size_kb": 64, 00:29:43.371 "state": "online", 00:29:43.371 "raid_level": "raid5f", 00:29:43.371 "superblock": false, 00:29:43.371 "num_base_bdevs": 3, 00:29:43.371 "num_base_bdevs_discovered": 3, 00:29:43.371 "num_base_bdevs_operational": 3, 00:29:43.371 "base_bdevs_list": [ 00:29:43.371 { 00:29:43.371 "name": "BaseBdev1", 00:29:43.371 "uuid": "8b552efe-1795-5daf-b3a1-c3cce55c2d05", 00:29:43.371 "is_configured": true, 00:29:43.371 "data_offset": 0, 00:29:43.371 "data_size": 65536 00:29:43.371 }, 00:29:43.371 { 00:29:43.371 "name": "BaseBdev2", 00:29:43.371 "uuid": "66acabeb-403e-585b-9a70-4061d61421ec", 00:29:43.371 "is_configured": true, 00:29:43.371 "data_offset": 0, 00:29:43.371 "data_size": 65536 00:29:43.371 }, 00:29:43.371 { 00:29:43.371 "name": "BaseBdev3", 00:29:43.371 "uuid": "a95a11bc-fcac-5c5e-8591-431626565b78", 00:29:43.371 "is_configured": true, 00:29:43.371 "data_offset": 0, 00:29:43.371 "data_size": 65536 00:29:43.371 } 00:29:43.371 ] 00:29:43.371 }' 00:29:43.371 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:43.371 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # 
jq -r '.[].num_blocks' 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.938 [2024-11-20 13:49:46.644815] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:43.938 13:49:46 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:43.938 13:49:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:44.197 [2024-11-20 13:49:47.076813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:29:44.197 /dev/nbd0 00:29:44.455 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:44.455 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:44.455 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:44.455 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:29:44.456 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:44.456 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:44.456 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:44.456 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:29:44.456 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:44.456 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:44.456 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:29:44.456 1+0 records in 00:29:44.456 1+0 records out 00:29:44.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026574 s, 15.4 MB/s 00:29:44.456 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:44.456 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:29:44.456 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:44.456 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:44.456 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:29:44.456 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:44.456 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:44.456 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:29:44.456 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:29:44.456 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:29:44.456 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:29:44.714 512+0 records in 00:29:44.714 512+0 records out 00:29:44.714 67108864 bytes (67 MB, 64 MiB) copied, 0.46371 s, 145 MB/s 00:29:44.714 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:29:44.714 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:44.714 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:44.714 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:44.714 13:49:47 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:44.714 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:44.714 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:44.974 [2024-11-20 13:49:47.855081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:44.974 [2024-11-20 13:49:47.874322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.974 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.233 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.233 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:45.233 "name": "raid_bdev1", 00:29:45.233 "uuid": "8fa316ae-62fe-4918-8d9f-9413bf642f43", 00:29:45.233 "strip_size_kb": 64, 00:29:45.233 "state": "online", 00:29:45.233 "raid_level": "raid5f", 00:29:45.233 "superblock": false, 00:29:45.233 "num_base_bdevs": 3, 00:29:45.233 "num_base_bdevs_discovered": 2, 00:29:45.233 "num_base_bdevs_operational": 2, 00:29:45.233 "base_bdevs_list": [ 00:29:45.233 { 00:29:45.233 "name": null, 00:29:45.233 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:29:45.233 "is_configured": false, 00:29:45.233 "data_offset": 0, 00:29:45.233 "data_size": 65536 00:29:45.233 }, 00:29:45.233 { 00:29:45.233 "name": "BaseBdev2", 00:29:45.233 "uuid": "66acabeb-403e-585b-9a70-4061d61421ec", 00:29:45.233 "is_configured": true, 00:29:45.233 "data_offset": 0, 00:29:45.233 "data_size": 65536 00:29:45.233 }, 00:29:45.233 { 00:29:45.233 "name": "BaseBdev3", 00:29:45.233 "uuid": "a95a11bc-fcac-5c5e-8591-431626565b78", 00:29:45.233 "is_configured": true, 00:29:45.233 "data_offset": 0, 00:29:45.233 "data_size": 65536 00:29:45.233 } 00:29:45.233 ] 00:29:45.233 }' 00:29:45.233 13:49:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:45.233 13:49:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.801 13:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:45.801 13:49:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.801 13:49:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.801 [2024-11-20 13:49:48.430588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:45.801 [2024-11-20 13:49:48.448198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:29:45.801 13:49:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.801 13:49:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:29:45.801 [2024-11-20 13:49:48.456655] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:46.740 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:46.740 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:46.740 
13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:46.740 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:46.740 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:46.740 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:46.740 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.740 13:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.740 13:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.740 13:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.740 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:46.740 "name": "raid_bdev1", 00:29:46.740 "uuid": "8fa316ae-62fe-4918-8d9f-9413bf642f43", 00:29:46.740 "strip_size_kb": 64, 00:29:46.740 "state": "online", 00:29:46.740 "raid_level": "raid5f", 00:29:46.740 "superblock": false, 00:29:46.740 "num_base_bdevs": 3, 00:29:46.740 "num_base_bdevs_discovered": 3, 00:29:46.740 "num_base_bdevs_operational": 3, 00:29:46.740 "process": { 00:29:46.740 "type": "rebuild", 00:29:46.740 "target": "spare", 00:29:46.740 "progress": { 00:29:46.740 "blocks": 18432, 00:29:46.740 "percent": 14 00:29:46.740 } 00:29:46.740 }, 00:29:46.740 "base_bdevs_list": [ 00:29:46.740 { 00:29:46.740 "name": "spare", 00:29:46.740 "uuid": "681e912f-fa5d-5931-ac27-d7ce44269256", 00:29:46.740 "is_configured": true, 00:29:46.740 "data_offset": 0, 00:29:46.740 "data_size": 65536 00:29:46.740 }, 00:29:46.740 { 00:29:46.740 "name": "BaseBdev2", 00:29:46.740 "uuid": "66acabeb-403e-585b-9a70-4061d61421ec", 00:29:46.740 "is_configured": true, 00:29:46.740 "data_offset": 0, 00:29:46.740 "data_size": 65536 00:29:46.740 }, 00:29:46.740 
{ 00:29:46.740 "name": "BaseBdev3", 00:29:46.740 "uuid": "a95a11bc-fcac-5c5e-8591-431626565b78", 00:29:46.740 "is_configured": true, 00:29:46.740 "data_offset": 0, 00:29:46.740 "data_size": 65536 00:29:46.740 } 00:29:46.740 ] 00:29:46.740 }' 00:29:46.740 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:46.740 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:46.740 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:46.740 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:46.740 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:29:46.740 13:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.740 13:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.740 [2024-11-20 13:49:49.619185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:46.999 [2024-11-20 13:49:49.675577] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:46.999 [2024-11-20 13:49:49.675750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:46.999 [2024-11-20 13:49:49.675783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:46.999 [2024-11-20 13:49:49.675795] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:46.999 13:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.999 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:29:46.999 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:29:46.999 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:46.999 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:29:46.999 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:46.999 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:46.999 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:46.999 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:46.999 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:46.999 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:46.999 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:46.999 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.999 13:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.999 13:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.999 13:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.999 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:46.999 "name": "raid_bdev1", 00:29:46.999 "uuid": "8fa316ae-62fe-4918-8d9f-9413bf642f43", 00:29:46.999 "strip_size_kb": 64, 00:29:46.999 "state": "online", 00:29:46.999 "raid_level": "raid5f", 00:29:46.999 "superblock": false, 00:29:46.999 "num_base_bdevs": 3, 00:29:46.999 "num_base_bdevs_discovered": 2, 00:29:46.999 "num_base_bdevs_operational": 2, 00:29:46.999 "base_bdevs_list": [ 00:29:46.999 { 00:29:46.999 "name": null, 00:29:46.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.999 
"is_configured": false, 00:29:46.999 "data_offset": 0, 00:29:46.999 "data_size": 65536 00:29:46.999 }, 00:29:46.999 { 00:29:46.999 "name": "BaseBdev2", 00:29:46.999 "uuid": "66acabeb-403e-585b-9a70-4061d61421ec", 00:29:46.999 "is_configured": true, 00:29:46.999 "data_offset": 0, 00:29:46.999 "data_size": 65536 00:29:46.999 }, 00:29:46.999 { 00:29:46.999 "name": "BaseBdev3", 00:29:46.999 "uuid": "a95a11bc-fcac-5c5e-8591-431626565b78", 00:29:46.999 "is_configured": true, 00:29:47.000 "data_offset": 0, 00:29:47.000 "data_size": 65536 00:29:47.000 } 00:29:47.000 ] 00:29:47.000 }' 00:29:47.000 13:49:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:47.000 13:49:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:47.567 "name": 
"raid_bdev1", 00:29:47.567 "uuid": "8fa316ae-62fe-4918-8d9f-9413bf642f43", 00:29:47.567 "strip_size_kb": 64, 00:29:47.567 "state": "online", 00:29:47.567 "raid_level": "raid5f", 00:29:47.567 "superblock": false, 00:29:47.567 "num_base_bdevs": 3, 00:29:47.567 "num_base_bdevs_discovered": 2, 00:29:47.567 "num_base_bdevs_operational": 2, 00:29:47.567 "base_bdevs_list": [ 00:29:47.567 { 00:29:47.567 "name": null, 00:29:47.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.567 "is_configured": false, 00:29:47.567 "data_offset": 0, 00:29:47.567 "data_size": 65536 00:29:47.567 }, 00:29:47.567 { 00:29:47.567 "name": "BaseBdev2", 00:29:47.567 "uuid": "66acabeb-403e-585b-9a70-4061d61421ec", 00:29:47.567 "is_configured": true, 00:29:47.567 "data_offset": 0, 00:29:47.567 "data_size": 65536 00:29:47.567 }, 00:29:47.567 { 00:29:47.567 "name": "BaseBdev3", 00:29:47.567 "uuid": "a95a11bc-fcac-5c5e-8591-431626565b78", 00:29:47.567 "is_configured": true, 00:29:47.567 "data_offset": 0, 00:29:47.567 "data_size": 65536 00:29:47.567 } 00:29:47.567 ] 00:29:47.567 }' 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.567 [2024-11-20 13:49:50.413605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:47.567 [2024-11-20 
13:49:50.430166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.567 13:49:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:29:47.567 [2024-11-20 13:49:50.438241] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:48.945 "name": "raid_bdev1", 00:29:48.945 "uuid": "8fa316ae-62fe-4918-8d9f-9413bf642f43", 00:29:48.945 "strip_size_kb": 64, 00:29:48.945 "state": "online", 00:29:48.945 "raid_level": "raid5f", 00:29:48.945 "superblock": false, 00:29:48.945 "num_base_bdevs": 3, 00:29:48.945 "num_base_bdevs_discovered": 3, 00:29:48.945 "num_base_bdevs_operational": 3, 
00:29:48.945 "process": { 00:29:48.945 "type": "rebuild", 00:29:48.945 "target": "spare", 00:29:48.945 "progress": { 00:29:48.945 "blocks": 18432, 00:29:48.945 "percent": 14 00:29:48.945 } 00:29:48.945 }, 00:29:48.945 "base_bdevs_list": [ 00:29:48.945 { 00:29:48.945 "name": "spare", 00:29:48.945 "uuid": "681e912f-fa5d-5931-ac27-d7ce44269256", 00:29:48.945 "is_configured": true, 00:29:48.945 "data_offset": 0, 00:29:48.945 "data_size": 65536 00:29:48.945 }, 00:29:48.945 { 00:29:48.945 "name": "BaseBdev2", 00:29:48.945 "uuid": "66acabeb-403e-585b-9a70-4061d61421ec", 00:29:48.945 "is_configured": true, 00:29:48.945 "data_offset": 0, 00:29:48.945 "data_size": 65536 00:29:48.945 }, 00:29:48.945 { 00:29:48.945 "name": "BaseBdev3", 00:29:48.945 "uuid": "a95a11bc-fcac-5c5e-8591-431626565b78", 00:29:48.945 "is_configured": true, 00:29:48.945 "data_offset": 0, 00:29:48.945 "data_size": 65536 00:29:48.945 } 00:29:48.945 ] 00:29:48.945 }' 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=602 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.945 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:48.945 "name": "raid_bdev1", 00:29:48.945 "uuid": "8fa316ae-62fe-4918-8d9f-9413bf642f43", 00:29:48.945 "strip_size_kb": 64, 00:29:48.945 "state": "online", 00:29:48.945 "raid_level": "raid5f", 00:29:48.945 "superblock": false, 00:29:48.945 "num_base_bdevs": 3, 00:29:48.945 "num_base_bdevs_discovered": 3, 00:29:48.945 "num_base_bdevs_operational": 3, 00:29:48.945 "process": { 00:29:48.945 "type": "rebuild", 00:29:48.945 "target": "spare", 00:29:48.945 "progress": { 00:29:48.945 "blocks": 22528, 00:29:48.945 "percent": 17 00:29:48.945 } 00:29:48.945 }, 00:29:48.945 "base_bdevs_list": [ 00:29:48.945 { 00:29:48.945 "name": "spare", 00:29:48.945 "uuid": "681e912f-fa5d-5931-ac27-d7ce44269256", 00:29:48.945 "is_configured": true, 00:29:48.945 "data_offset": 0, 00:29:48.945 "data_size": 65536 00:29:48.945 }, 00:29:48.945 { 00:29:48.945 "name": "BaseBdev2", 
00:29:48.945 "uuid": "66acabeb-403e-585b-9a70-4061d61421ec", 00:29:48.945 "is_configured": true, 00:29:48.945 "data_offset": 0, 00:29:48.945 "data_size": 65536 00:29:48.945 }, 00:29:48.945 { 00:29:48.945 "name": "BaseBdev3", 00:29:48.945 "uuid": "a95a11bc-fcac-5c5e-8591-431626565b78", 00:29:48.945 "is_configured": true, 00:29:48.945 "data_offset": 0, 00:29:48.945 "data_size": 65536 00:29:48.945 } 00:29:48.945 ] 00:29:48.945 }' 00:29:48.946 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:48.946 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:48.946 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:48.946 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:48.946 13:49:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:49.881 13:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:49.882 13:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:49.882 13:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:49.882 13:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:49.882 13:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:49.882 13:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:49.882 13:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:49.882 13:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:49.882 13:49:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.882 
13:49:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.141 13:49:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.141 13:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:50.141 "name": "raid_bdev1", 00:29:50.141 "uuid": "8fa316ae-62fe-4918-8d9f-9413bf642f43", 00:29:50.141 "strip_size_kb": 64, 00:29:50.141 "state": "online", 00:29:50.141 "raid_level": "raid5f", 00:29:50.141 "superblock": false, 00:29:50.141 "num_base_bdevs": 3, 00:29:50.141 "num_base_bdevs_discovered": 3, 00:29:50.141 "num_base_bdevs_operational": 3, 00:29:50.141 "process": { 00:29:50.141 "type": "rebuild", 00:29:50.141 "target": "spare", 00:29:50.141 "progress": { 00:29:50.141 "blocks": 47104, 00:29:50.141 "percent": 35 00:29:50.141 } 00:29:50.141 }, 00:29:50.141 "base_bdevs_list": [ 00:29:50.141 { 00:29:50.141 "name": "spare", 00:29:50.141 "uuid": "681e912f-fa5d-5931-ac27-d7ce44269256", 00:29:50.141 "is_configured": true, 00:29:50.141 "data_offset": 0, 00:29:50.141 "data_size": 65536 00:29:50.141 }, 00:29:50.141 { 00:29:50.141 "name": "BaseBdev2", 00:29:50.141 "uuid": "66acabeb-403e-585b-9a70-4061d61421ec", 00:29:50.141 "is_configured": true, 00:29:50.141 "data_offset": 0, 00:29:50.141 "data_size": 65536 00:29:50.141 }, 00:29:50.141 { 00:29:50.141 "name": "BaseBdev3", 00:29:50.141 "uuid": "a95a11bc-fcac-5c5e-8591-431626565b78", 00:29:50.141 "is_configured": true, 00:29:50.141 "data_offset": 0, 00:29:50.141 "data_size": 65536 00:29:50.141 } 00:29:50.141 ] 00:29:50.141 }' 00:29:50.141 13:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:50.141 13:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:50.141 13:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:50.141 13:49:52 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:50.141 13:49:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:51.077 13:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:51.077 13:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:51.077 13:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:51.077 13:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:51.077 13:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:51.077 13:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:51.077 13:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:51.077 13:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.077 13:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:51.077 13:49:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:51.077 13:49:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.337 13:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:51.337 "name": "raid_bdev1", 00:29:51.337 "uuid": "8fa316ae-62fe-4918-8d9f-9413bf642f43", 00:29:51.337 "strip_size_kb": 64, 00:29:51.337 "state": "online", 00:29:51.337 "raid_level": "raid5f", 00:29:51.337 "superblock": false, 00:29:51.337 "num_base_bdevs": 3, 00:29:51.337 "num_base_bdevs_discovered": 3, 00:29:51.337 "num_base_bdevs_operational": 3, 00:29:51.337 "process": { 00:29:51.337 "type": "rebuild", 00:29:51.337 "target": "spare", 00:29:51.337 "progress": { 00:29:51.337 "blocks": 69632, 00:29:51.337 "percent": 53 00:29:51.337 } 
00:29:51.337 }, 00:29:51.337 "base_bdevs_list": [ 00:29:51.337 { 00:29:51.337 "name": "spare", 00:29:51.337 "uuid": "681e912f-fa5d-5931-ac27-d7ce44269256", 00:29:51.337 "is_configured": true, 00:29:51.337 "data_offset": 0, 00:29:51.337 "data_size": 65536 00:29:51.337 }, 00:29:51.337 { 00:29:51.337 "name": "BaseBdev2", 00:29:51.337 "uuid": "66acabeb-403e-585b-9a70-4061d61421ec", 00:29:51.337 "is_configured": true, 00:29:51.337 "data_offset": 0, 00:29:51.337 "data_size": 65536 00:29:51.337 }, 00:29:51.337 { 00:29:51.337 "name": "BaseBdev3", 00:29:51.337 "uuid": "a95a11bc-fcac-5c5e-8591-431626565b78", 00:29:51.337 "is_configured": true, 00:29:51.337 "data_offset": 0, 00:29:51.337 "data_size": 65536 00:29:51.337 } 00:29:51.337 ] 00:29:51.337 }' 00:29:51.337 13:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:51.337 13:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:51.337 13:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:51.337 13:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:51.337 13:49:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:52.272 13:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:52.272 13:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:52.272 13:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:52.272 13:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:52.272 13:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:52.272 13:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:52.272 13:49:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:52.272 13:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:52.272 13:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.272 13:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.272 13:49:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.532 13:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:52.532 "name": "raid_bdev1", 00:29:52.532 "uuid": "8fa316ae-62fe-4918-8d9f-9413bf642f43", 00:29:52.532 "strip_size_kb": 64, 00:29:52.532 "state": "online", 00:29:52.532 "raid_level": "raid5f", 00:29:52.532 "superblock": false, 00:29:52.532 "num_base_bdevs": 3, 00:29:52.532 "num_base_bdevs_discovered": 3, 00:29:52.532 "num_base_bdevs_operational": 3, 00:29:52.532 "process": { 00:29:52.532 "type": "rebuild", 00:29:52.532 "target": "spare", 00:29:52.532 "progress": { 00:29:52.532 "blocks": 94208, 00:29:52.532 "percent": 71 00:29:52.532 } 00:29:52.532 }, 00:29:52.532 "base_bdevs_list": [ 00:29:52.532 { 00:29:52.532 "name": "spare", 00:29:52.532 "uuid": "681e912f-fa5d-5931-ac27-d7ce44269256", 00:29:52.532 "is_configured": true, 00:29:52.532 "data_offset": 0, 00:29:52.532 "data_size": 65536 00:29:52.532 }, 00:29:52.532 { 00:29:52.532 "name": "BaseBdev2", 00:29:52.532 "uuid": "66acabeb-403e-585b-9a70-4061d61421ec", 00:29:52.532 "is_configured": true, 00:29:52.532 "data_offset": 0, 00:29:52.532 "data_size": 65536 00:29:52.532 }, 00:29:52.532 { 00:29:52.532 "name": "BaseBdev3", 00:29:52.532 "uuid": "a95a11bc-fcac-5c5e-8591-431626565b78", 00:29:52.532 "is_configured": true, 00:29:52.532 "data_offset": 0, 00:29:52.532 "data_size": 65536 00:29:52.532 } 00:29:52.532 ] 00:29:52.532 }' 00:29:52.532 13:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:29:52.532 13:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:52.532 13:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:52.532 13:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:52.532 13:49:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:53.468 13:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:53.468 13:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:53.468 13:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:53.468 13:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:53.468 13:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:53.468 13:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:53.468 13:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:53.468 13:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:53.468 13:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.468 13:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:53.468 13:49:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.468 13:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:53.468 "name": "raid_bdev1", 00:29:53.468 "uuid": "8fa316ae-62fe-4918-8d9f-9413bf642f43", 00:29:53.468 "strip_size_kb": 64, 00:29:53.468 "state": "online", 00:29:53.468 "raid_level": "raid5f", 00:29:53.468 "superblock": 
false, 00:29:53.468 "num_base_bdevs": 3, 00:29:53.468 "num_base_bdevs_discovered": 3, 00:29:53.468 "num_base_bdevs_operational": 3, 00:29:53.468 "process": { 00:29:53.468 "type": "rebuild", 00:29:53.468 "target": "spare", 00:29:53.468 "progress": { 00:29:53.468 "blocks": 116736, 00:29:53.468 "percent": 89 00:29:53.468 } 00:29:53.468 }, 00:29:53.468 "base_bdevs_list": [ 00:29:53.468 { 00:29:53.468 "name": "spare", 00:29:53.468 "uuid": "681e912f-fa5d-5931-ac27-d7ce44269256", 00:29:53.468 "is_configured": true, 00:29:53.468 "data_offset": 0, 00:29:53.468 "data_size": 65536 00:29:53.468 }, 00:29:53.468 { 00:29:53.468 "name": "BaseBdev2", 00:29:53.468 "uuid": "66acabeb-403e-585b-9a70-4061d61421ec", 00:29:53.468 "is_configured": true, 00:29:53.468 "data_offset": 0, 00:29:53.468 "data_size": 65536 00:29:53.468 }, 00:29:53.468 { 00:29:53.468 "name": "BaseBdev3", 00:29:53.468 "uuid": "a95a11bc-fcac-5c5e-8591-431626565b78", 00:29:53.468 "is_configured": true, 00:29:53.468 "data_offset": 0, 00:29:53.468 "data_size": 65536 00:29:53.468 } 00:29:53.468 ] 00:29:53.468 }' 00:29:53.468 13:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:53.727 13:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:53.727 13:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:53.727 13:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:29:53.727 13:49:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:29:54.019 [2024-11-20 13:49:56.928144] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:54.019 [2024-11-20 13:49:56.928291] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:54.019 [2024-11-20 13:49:56.928376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
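The rebuild cycle that just completed above is verified at each step with the same `jq` pattern from `bdev_raid.sh@176-177`: extract `.process.type` and `.process.target`, substituting `"none"` when no process is running. A minimal standalone sketch of that check, using a canned JSON sample in place of the real `rpc_cmd bdev_raid_get_bdevs all` output (the variable names and the sample document are illustrative, not taken from the test scripts):

```shell
#!/usr/bin/env bash
# Sketch of the verify_raid_bdev_process jq checks seen in this log.
# Assumption: raid_bdev_info normally comes from
#   rpc_cmd bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")'
# here it is a canned sample so the snippet is self-contained.
set -u

raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "process": {
    "type": "rebuild",
    "target": "spare",
    "progress": { "blocks": 18432, "percent": 14 }
  }
}'

# jq's // operator falls back to "none" when .process is absent, which is
# how the script tells an idle raid bdev apart from one mid-rebuild.
process_type=$(jq -r '.process.type // "none"' <<< "$raid_bdev_info")
process_target=$(jq -r '.process.target // "none"' <<< "$raid_bdev_info")

[[ "$process_type" == "rebuild" ]] || { echo "unexpected type: $process_type" >&2; exit 1; }
[[ "$process_target" == "spare" ]] || { echo "unexpected target: $process_target" >&2; exit 1; }
echo "process: $process_type -> $process_target"
```

Once the rebuild finishes, the same filter yields `none` for both fields, which is what the `[[ none == \n\o\n\e ]]` comparisons later in the log check for.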
00:29:54.586 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:29:54.586 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:54.586 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:54.586 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:29:54.586 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:29:54.586 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:54.586 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:54.586 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:54.586 13:49:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.586 13:49:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.586 13:49:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:29:54.846 "name": "raid_bdev1", 00:29:54.846 "uuid": "8fa316ae-62fe-4918-8d9f-9413bf642f43", 00:29:54.846 "strip_size_kb": 64, 00:29:54.846 "state": "online", 00:29:54.846 "raid_level": "raid5f", 00:29:54.846 "superblock": false, 00:29:54.846 "num_base_bdevs": 3, 00:29:54.846 "num_base_bdevs_discovered": 3, 00:29:54.846 "num_base_bdevs_operational": 3, 00:29:54.846 "base_bdevs_list": [ 00:29:54.846 { 00:29:54.846 "name": "spare", 00:29:54.846 "uuid": "681e912f-fa5d-5931-ac27-d7ce44269256", 00:29:54.846 "is_configured": true, 00:29:54.846 "data_offset": 0, 00:29:54.846 "data_size": 65536 00:29:54.846 }, 00:29:54.846 { 00:29:54.846 "name": "BaseBdev2", 00:29:54.846 "uuid": 
"66acabeb-403e-585b-9a70-4061d61421ec", 00:29:54.846 "is_configured": true, 00:29:54.846 "data_offset": 0, 00:29:54.846 "data_size": 65536 00:29:54.846 }, 00:29:54.846 { 00:29:54.846 "name": "BaseBdev3", 00:29:54.846 "uuid": "a95a11bc-fcac-5c5e-8591-431626565b78", 00:29:54.846 "is_configured": true, 00:29:54.846 "data_offset": 0, 00:29:54.846 "data_size": 65536 00:29:54.846 } 00:29:54.846 ] 00:29:54.846 }' 00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:29:54.846 "name": "raid_bdev1",
00:29:54.846 "uuid": "8fa316ae-62fe-4918-8d9f-9413bf642f43",
00:29:54.846 "strip_size_kb": 64,
00:29:54.846 "state": "online",
00:29:54.846 "raid_level": "raid5f",
00:29:54.846 "superblock": false,
00:29:54.846 "num_base_bdevs": 3,
00:29:54.846 "num_base_bdevs_discovered": 3,
00:29:54.846 "num_base_bdevs_operational": 3,
00:29:54.846 "base_bdevs_list": [
00:29:54.846 {
00:29:54.846 "name": "spare",
00:29:54.846 "uuid": "681e912f-fa5d-5931-ac27-d7ce44269256",
00:29:54.846 "is_configured": true,
00:29:54.846 "data_offset": 0,
00:29:54.846 "data_size": 65536
00:29:54.846 },
00:29:54.846 {
00:29:54.846 "name": "BaseBdev2",
00:29:54.846 "uuid": "66acabeb-403e-585b-9a70-4061d61421ec",
00:29:54.846 "is_configured": true,
00:29:54.846 "data_offset": 0,
00:29:54.846 "data_size": 65536
00:29:54.846 },
00:29:54.846 {
00:29:54.846 "name": "BaseBdev3",
00:29:54.846 "uuid": "a95a11bc-fcac-5c5e-8591-431626565b78",
00:29:54.846 "is_configured": true,
00:29:54.846 "data_offset": 0,
00:29:54.846 "data_size": 65536
00:29:54.846 }
00:29:54.846 ]
00:29:54.846 }'
00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:29:54.846 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:29:55.105 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:29:55.105 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:29:55.105 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:29:55.105 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:29:55.105 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:29:55.105 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:29:55.105 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:29:55.105 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:29:55.105 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:29:55.105 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:29:55.105 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:29:55.105 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:55.105 13:49:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:55.105 13:49:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:29:55.105 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:55.105 13:49:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:55.105 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:29:55.105 "name": "raid_bdev1",
00:29:55.105 "uuid": "8fa316ae-62fe-4918-8d9f-9413bf642f43",
00:29:55.105 "strip_size_kb": 64,
00:29:55.105 "state": "online",
00:29:55.105 "raid_level": "raid5f",
00:29:55.105 "superblock": false,
00:29:55.105 "num_base_bdevs": 3,
00:29:55.105 "num_base_bdevs_discovered": 3,
00:29:55.105 "num_base_bdevs_operational": 3,
00:29:55.105 "base_bdevs_list": [
00:29:55.105 {
00:29:55.105 "name": "spare",
00:29:55.105 "uuid": "681e912f-fa5d-5931-ac27-d7ce44269256",
00:29:55.105 "is_configured": true,
00:29:55.105 "data_offset": 0,
00:29:55.105 "data_size": 65536
00:29:55.105 },
00:29:55.105 {
00:29:55.105 "name": "BaseBdev2",
00:29:55.105 "uuid": "66acabeb-403e-585b-9a70-4061d61421ec",
00:29:55.105 "is_configured": true,
00:29:55.105 "data_offset": 0,
00:29:55.105 "data_size": 65536
00:29:55.105 },
00:29:55.105 {
00:29:55.105 "name": "BaseBdev3",
00:29:55.105 "uuid": "a95a11bc-fcac-5c5e-8591-431626565b78",
00:29:55.105 "is_configured": true,
00:29:55.105 "data_offset": 0,
00:29:55.105 "data_size": 65536
00:29:55.105 }
00:29:55.105 ]
00:29:55.105 }'
00:29:55.105 13:49:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:29:55.105 13:49:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:29:55.674 [2024-11-20 13:49:58.342772] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:29:55.674 [2024-11-20 13:49:58.342816] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:29:55.674 [2024-11-20 13:49:58.342961] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:29:55.674 [2024-11-20 13:49:58.343084] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:29:55.674 [2024-11-20 13:49:58.343110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:29:55.674 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:29:55.933 /dev/nbd0
00:29:55.933 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:29:55.933 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:29:55.933 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:29:55.933 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i
00:29:55.933 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:55.933 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:55.933 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:29:55.933 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break
00:29:55.933 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:55.933 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:55.933 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:55.933 1+0 records in
00:29:55.933 1+0 records out
00:29:55.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447432 s, 9.2 MB/s
00:29:55.933 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:55.933 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096
00:29:55.933 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:55.933 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:55.933 13:49:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0
00:29:55.933 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:55.933 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:29:55.933 13:49:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:29:56.192 /dev/nbd1
00:29:56.192 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:29:56.460 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:29:56.460 13:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:29:56.460 13:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i
00:29:56.460 13:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:29:56.460 13:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:29:56.460 13:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:29:56.460 13:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break
00:29:56.460 13:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:29:56.460 13:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:29:56.460 13:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:29:56.460 1+0 records in
00:29:56.460 1+0 records out
00:29:56.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365161 s, 11.2 MB/s
00:29:56.460 13:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:56.460 13:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096
00:29:56.460 13:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:29:56.461 13:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:29:56.461 13:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0
00:29:56.461 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:29:56.461 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:29:56.461 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1
00:29:56.461 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:29:56.461 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:29:56.461 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:29:56.461 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:29:56.461 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:29:56.461 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:56.461 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:29:57.027 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:29:57.027 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:29:57.027 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:29:57.027 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:57.027 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:57.027 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:29:57.027 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:29:57.027 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:29:57.027 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:29:57.027 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:29:57.287 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:29:57.287 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:29:57.287 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:29:57.287 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:29:57.287 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:29:57.287 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:29:57.287 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:29:57.287 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:29:57.287 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']'
00:29:57.287 13:49:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82180
00:29:57.287 13:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82180 ']'
00:29:57.287 13:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82180
00:29:57.287 13:49:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname
00:29:57.287 13:50:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:57.287 13:50:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82180
00:29:57.287 13:50:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:57.287 13:50:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:57.287 killing process with pid 82180 13:50:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82180'
00:29:57.287 13:50:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82180
00:29:57.287 Received shutdown signal, test time was about 60.000000 seconds
00:29:57.287
00:29:57.287 Latency(us)
00:29:57.287 [2024-11-20T13:50:00.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:57.287 [2024-11-20T13:50:00.204Z] ===================================================================================================================
00:29:57.287 [2024-11-20T13:50:00.204Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:29:57.287 [2024-11-20 13:50:00.034875] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:29:57.287 13:50:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82180
00:29:57.546 [2024-11-20 13:50:00.397188] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:29:58.923 13:50:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0
00:29:58.923
00:29:58.923 real 0m16.922s
00:29:58.923 user 0m21.644s
00:29:58.923 sys 0m2.271s
00:29:58.923 13:50:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:58.923 13:50:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:29:58.923 ************************************
00:29:58.923 END TEST raid5f_rebuild_test
00:29:58.923 ************************************
00:29:58.924 13:50:01 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true
00:29:58.924 13:50:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:29:58.924 13:50:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:58.924 13:50:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:29:58.924 ************************************
00:29:58.924 START TEST raid5f_rebuild_test_sb
00:29:58.924 ************************************
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']'
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']'
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64'
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82632
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82632
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82632 ']'
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:58.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:58.924 13:50:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:29:58.924 [2024-11-20 13:50:01.700335] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization...
00:29:58.924 I/O size of 3145728 is greater than zero copy threshold (65536).
00:29:58.924 Zero copy mechanism will not be used.
00:29:58.924 [2024-11-20 13:50:01.700574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82632 ]
00:29:59.182 [2024-11-20 13:50:01.897113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:59.182 [2024-11-20 13:50:02.057271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:59.441 [2024-11-20 13:50:02.289061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:29:59.441 [2024-11-20 13:50:02.289122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:30:00.009 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:00.009 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0
00:30:00.009 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:30:00.009 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:30:00.009 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.009 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:30:00.009 BaseBdev1_malloc
00:30:00.009 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.009 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:30:00.009 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.009 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:30:00.009 [2024-11-20 13:50:02.805230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:30:00.009 [2024-11-20 13:50:02.805315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:00.009 [2024-11-20 13:50:02.805357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:30:00.009 [2024-11-20 13:50:02.805381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:00.009 [2024-11-20 13:50:02.808312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:00.009 [2024-11-20 13:50:02.808384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:30:00.009 BaseBdev1
00:30:00.009 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.009 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:30:00.009 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:30:00.009 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.009 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:30:00.009 BaseBdev2_malloc
00:30:00.010 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.010 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:30:00.010 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.010 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:30:00.010 [2024-11-20 13:50:02.859759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:30:00.010 [2024-11-20 13:50:02.859856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:00.010 [2024-11-20 13:50:02.859909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:30:00.010 [2024-11-20 13:50:02.859933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:00.010 [2024-11-20 13:50:02.862879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:00.010 [2024-11-20 13:50:02.862952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:30:00.010 BaseBdev2
00:30:00.010 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.010 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:30:00.010 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:30:00.010 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.010 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:30:00.010 BaseBdev3_malloc
00:30:00.010 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.010 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:30:00.010 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.010 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:30:00.269 [2024-11-20 13:50:02.925633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:30:00.269 [2024-11-20 13:50:02.925715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:00.269 [2024-11-20 13:50:02.925764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:30:00.269 [2024-11-20 13:50:02.925788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:00.269 [2024-11-20 13:50:02.928909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:00.269 [2024-11-20 13:50:02.928985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:30:00.269 BaseBdev3
00:30:00.269 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.269 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:30:00.269 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.269 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:30:00.269 spare_malloc
00:30:00.269 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.269 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:30:00.269 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.269 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:30:00.269 spare_delay
00:30:00.269 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.269 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:30:00.269 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.269 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:30:00.269 [2024-11-20 13:50:02.988686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:30:00.269 [2024-11-20 13:50:02.988774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:00.269 [2024-11-20 13:50:02.988803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:30:00.269 [2024-11-20 13:50:02.988822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:00.269 [2024-11-20 13:50:02.991789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:00.269 [2024-11-20 13:50:02.991845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:30:00.269 spare
00:30:00.269 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.269 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1
00:30:00.269 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.269 13:50:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:30:00.269 [2024-11-20 13:50:03.000945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:30:00.269 [2024-11-20 13:50:03.003579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:30:00.269 [2024-11-20 13:50:03.003699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:30:00.269 [2024-11-20 13:50:03.003967] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:30:00.269 [2024-11-20 13:50:03.004016] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:30:00.269 [2024-11-20 13:50:03.004389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:30:00.269 [2024-11-20 13:50:03.009768] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:30:00.269 [2024-11-20 13:50:03.009825] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:30:00.269 [2024-11-20 13:50:03.010133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:30:00.269 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.269 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:30:00.269 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:30:00.269 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:30:00.269 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:30:00.269 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:30:00.269 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:30:00.269 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:30:00.269 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:30:00.269 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:30:00.269 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:30:00.269 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:00.269 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:30:00.270 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.270 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:30:00.270 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.270 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:30:00.270 "name": "raid_bdev1",
00:30:00.270 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef",
00:30:00.270 "strip_size_kb": 64,
00:30:00.270 "state": "online",
00:30:00.270 "raid_level": "raid5f",
00:30:00.270 "superblock": true,
00:30:00.270 "num_base_bdevs": 3,
00:30:00.270 "num_base_bdevs_discovered": 3,
00:30:00.270 "num_base_bdevs_operational": 3,
00:30:00.270 "base_bdevs_list": [
00:30:00.270 {
00:30:00.270 "name": "BaseBdev1",
00:30:00.270 "uuid": "fe1c1d20-f487-5a71-860a-40de210e9228",
00:30:00.270 "is_configured": true,
00:30:00.270 "data_offset": 2048,
00:30:00.270 "data_size": 63488
00:30:00.270 },
00:30:00.270 {
00:30:00.270 "name": "BaseBdev2",
00:30:00.270 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d",
00:30:00.270 "is_configured": true,
00:30:00.270 "data_offset": 2048,
00:30:00.270 "data_size": 63488
00:30:00.270 },
00:30:00.270 {
00:30:00.270 "name": "BaseBdev3",
00:30:00.270 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732",
00:30:00.270 "is_configured": true,
00:30:00.270 "data_offset": 2048,
00:30:00.270 "data_size": 63488
00:30:00.270 }
00:30:00.270 ]
00:30:00.270 }'
00:30:00.270 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:30:00.270 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:30:00.838 [2024-11-20 13:50:03.601389] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:30:00.838 13:50:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:30:01.098 [2024-11-20 13:50:03.989255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:30:01.098 /dev/nbd0
00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i
00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875
-- # (( i = 1 )) 00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:01.356 1+0 records in 00:30:01.356 1+0 records out 00:30:01.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329797 s, 12.4 MB/s 00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 
00:30:01.356 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:30:01.921 496+0 records in 00:30:01.921 496+0 records out 00:30:01.921 65011712 bytes (65 MB, 62 MiB) copied, 0.503859 s, 129 MB/s 00:30:01.921 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:30:01.921 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:01.921 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:01.921 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:01.921 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:01.921 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:01.921 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:02.180 [2024-11-20 13:50:04.867032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:02.180 13:50:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.180 [2024-11-20 13:50:04.879209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:02.180 "name": "raid_bdev1", 00:30:02.180 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:02.180 "strip_size_kb": 64, 00:30:02.180 "state": "online", 00:30:02.180 "raid_level": "raid5f", 00:30:02.180 "superblock": true, 00:30:02.180 "num_base_bdevs": 3, 00:30:02.180 "num_base_bdevs_discovered": 2, 00:30:02.180 "num_base_bdevs_operational": 2, 00:30:02.180 "base_bdevs_list": [ 00:30:02.180 { 00:30:02.180 "name": null, 00:30:02.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.180 "is_configured": false, 00:30:02.180 "data_offset": 0, 00:30:02.180 "data_size": 63488 00:30:02.180 }, 00:30:02.180 { 00:30:02.180 "name": "BaseBdev2", 00:30:02.180 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:02.180 "is_configured": true, 00:30:02.180 "data_offset": 2048, 00:30:02.180 "data_size": 63488 00:30:02.180 }, 00:30:02.180 { 00:30:02.180 "name": "BaseBdev3", 00:30:02.180 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:02.180 "is_configured": true, 00:30:02.180 "data_offset": 2048, 00:30:02.180 "data_size": 63488 00:30:02.180 } 00:30:02.180 ] 00:30:02.180 }' 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:02.180 13:50:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.747 13:50:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:02.747 13:50:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.747 13:50:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.747 [2024-11-20 13:50:05.391396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:02.747 [2024-11-20 13:50:05.409312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:30:02.747 13:50:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.747 13:50:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:30:02.747 [2024-11-20 13:50:05.417956] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:03.686 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:03.686 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:03.686 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:03.686 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:03.686 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:03.686 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:03.686 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:03.686 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.686 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.686 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.686 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:03.686 "name": "raid_bdev1", 00:30:03.686 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 
00:30:03.686 "strip_size_kb": 64, 00:30:03.686 "state": "online", 00:30:03.686 "raid_level": "raid5f", 00:30:03.686 "superblock": true, 00:30:03.686 "num_base_bdevs": 3, 00:30:03.686 "num_base_bdevs_discovered": 3, 00:30:03.686 "num_base_bdevs_operational": 3, 00:30:03.686 "process": { 00:30:03.686 "type": "rebuild", 00:30:03.686 "target": "spare", 00:30:03.686 "progress": { 00:30:03.686 "blocks": 18432, 00:30:03.686 "percent": 14 00:30:03.686 } 00:30:03.686 }, 00:30:03.686 "base_bdevs_list": [ 00:30:03.686 { 00:30:03.686 "name": "spare", 00:30:03.686 "uuid": "788c890c-d052-56e3-8548-0f0e9cdfe2cb", 00:30:03.686 "is_configured": true, 00:30:03.686 "data_offset": 2048, 00:30:03.686 "data_size": 63488 00:30:03.686 }, 00:30:03.686 { 00:30:03.686 "name": "BaseBdev2", 00:30:03.686 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:03.686 "is_configured": true, 00:30:03.686 "data_offset": 2048, 00:30:03.686 "data_size": 63488 00:30:03.686 }, 00:30:03.686 { 00:30:03.686 "name": "BaseBdev3", 00:30:03.686 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:03.686 "is_configured": true, 00:30:03.686 "data_offset": 2048, 00:30:03.686 "data_size": 63488 00:30:03.686 } 00:30:03.686 ] 00:30:03.686 }' 00:30:03.686 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:03.686 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:03.686 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:03.686 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:03.686 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:03.686 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.686 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:30:03.686 [2024-11-20 13:50:06.574463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:03.946 [2024-11-20 13:50:06.638930] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:03.946 [2024-11-20 13:50:06.639085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:03.946 [2024-11-20 13:50:06.639131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:03.946 [2024-11-20 13:50:06.639157] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:03.946 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.946 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:30:03.946 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:03.946 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:03.946 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:03.946 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:03.946 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:03.946 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:03.946 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:03.946 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:03.946 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:03.946 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:30:03.946 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:03.946 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.946 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.946 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.946 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:03.946 "name": "raid_bdev1", 00:30:03.946 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:03.946 "strip_size_kb": 64, 00:30:03.946 "state": "online", 00:30:03.946 "raid_level": "raid5f", 00:30:03.946 "superblock": true, 00:30:03.946 "num_base_bdevs": 3, 00:30:03.946 "num_base_bdevs_discovered": 2, 00:30:03.946 "num_base_bdevs_operational": 2, 00:30:03.946 "base_bdevs_list": [ 00:30:03.946 { 00:30:03.946 "name": null, 00:30:03.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:03.946 "is_configured": false, 00:30:03.946 "data_offset": 0, 00:30:03.946 "data_size": 63488 00:30:03.946 }, 00:30:03.946 { 00:30:03.946 "name": "BaseBdev2", 00:30:03.946 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:03.946 "is_configured": true, 00:30:03.946 "data_offset": 2048, 00:30:03.946 "data_size": 63488 00:30:03.946 }, 00:30:03.946 { 00:30:03.946 "name": "BaseBdev3", 00:30:03.946 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:03.946 "is_configured": true, 00:30:03.946 "data_offset": 2048, 00:30:03.946 "data_size": 63488 00:30:03.946 } 00:30:03.946 ] 00:30:03.946 }' 00:30:03.946 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:03.946 13:50:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:04.543 13:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:04.543 13:50:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:04.543 13:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:04.543 13:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:04.543 13:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:04.543 13:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:04.543 13:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:04.543 13:50:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.543 13:50:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:04.544 13:50:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.544 13:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:04.544 "name": "raid_bdev1", 00:30:04.544 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:04.544 "strip_size_kb": 64, 00:30:04.544 "state": "online", 00:30:04.544 "raid_level": "raid5f", 00:30:04.544 "superblock": true, 00:30:04.544 "num_base_bdevs": 3, 00:30:04.544 "num_base_bdevs_discovered": 2, 00:30:04.544 "num_base_bdevs_operational": 2, 00:30:04.544 "base_bdevs_list": [ 00:30:04.544 { 00:30:04.544 "name": null, 00:30:04.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.544 "is_configured": false, 00:30:04.544 "data_offset": 0, 00:30:04.544 "data_size": 63488 00:30:04.544 }, 00:30:04.544 { 00:30:04.544 "name": "BaseBdev2", 00:30:04.544 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:04.544 "is_configured": true, 00:30:04.544 "data_offset": 2048, 00:30:04.544 "data_size": 63488 00:30:04.544 }, 00:30:04.544 { 00:30:04.544 "name": "BaseBdev3", 00:30:04.544 "uuid": 
"7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:04.544 "is_configured": true, 00:30:04.544 "data_offset": 2048, 00:30:04.544 "data_size": 63488 00:30:04.544 } 00:30:04.544 ] 00:30:04.544 }' 00:30:04.544 13:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:04.544 13:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:04.544 13:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:04.544 13:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:04.544 13:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:04.544 13:50:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.544 13:50:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:04.544 [2024-11-20 13:50:07.394495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:04.544 [2024-11-20 13:50:07.411008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:30:04.544 13:50:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.544 13:50:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:30:04.544 [2024-11-20 13:50:07.419375] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:05.921 "name": "raid_bdev1", 00:30:05.921 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:05.921 "strip_size_kb": 64, 00:30:05.921 "state": "online", 00:30:05.921 "raid_level": "raid5f", 00:30:05.921 "superblock": true, 00:30:05.921 "num_base_bdevs": 3, 00:30:05.921 "num_base_bdevs_discovered": 3, 00:30:05.921 "num_base_bdevs_operational": 3, 00:30:05.921 "process": { 00:30:05.921 "type": "rebuild", 00:30:05.921 "target": "spare", 00:30:05.921 "progress": { 00:30:05.921 "blocks": 18432, 00:30:05.921 "percent": 14 00:30:05.921 } 00:30:05.921 }, 00:30:05.921 "base_bdevs_list": [ 00:30:05.921 { 00:30:05.921 "name": "spare", 00:30:05.921 "uuid": "788c890c-d052-56e3-8548-0f0e9cdfe2cb", 00:30:05.921 "is_configured": true, 00:30:05.921 "data_offset": 2048, 00:30:05.921 "data_size": 63488 00:30:05.921 }, 00:30:05.921 { 00:30:05.921 "name": "BaseBdev2", 00:30:05.921 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:05.921 "is_configured": true, 00:30:05.921 "data_offset": 2048, 00:30:05.921 "data_size": 63488 00:30:05.921 }, 00:30:05.921 { 00:30:05.921 "name": "BaseBdev3", 00:30:05.921 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:05.921 
"is_configured": true, 00:30:05.921 "data_offset": 2048, 00:30:05.921 "data_size": 63488 00:30:05.921 } 00:30:05.921 ] 00:30:05.921 }' 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:30:05.921 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=619 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:05.921 "name": "raid_bdev1", 00:30:05.921 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:05.921 "strip_size_kb": 64, 00:30:05.921 "state": "online", 00:30:05.921 "raid_level": "raid5f", 00:30:05.921 "superblock": true, 00:30:05.921 "num_base_bdevs": 3, 00:30:05.921 "num_base_bdevs_discovered": 3, 00:30:05.921 "num_base_bdevs_operational": 3, 00:30:05.921 "process": { 00:30:05.921 "type": "rebuild", 00:30:05.921 "target": "spare", 00:30:05.921 "progress": { 00:30:05.921 "blocks": 22528, 00:30:05.921 "percent": 17 00:30:05.921 } 00:30:05.921 }, 00:30:05.921 "base_bdevs_list": [ 00:30:05.921 { 00:30:05.921 "name": "spare", 00:30:05.921 "uuid": "788c890c-d052-56e3-8548-0f0e9cdfe2cb", 00:30:05.921 "is_configured": true, 00:30:05.921 "data_offset": 2048, 00:30:05.921 "data_size": 63488 00:30:05.921 }, 00:30:05.921 { 00:30:05.921 "name": "BaseBdev2", 00:30:05.921 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:05.921 "is_configured": true, 00:30:05.921 "data_offset": 2048, 00:30:05.921 "data_size": 63488 00:30:05.921 }, 00:30:05.921 { 00:30:05.921 "name": "BaseBdev3", 00:30:05.921 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:05.921 "is_configured": true, 00:30:05.921 "data_offset": 2048, 00:30:05.921 "data_size": 63488 00:30:05.921 } 00:30:05.921 ] 00:30:05.921 }' 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:05.921 13:50:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:06.858 13:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:06.858 13:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:06.858 13:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:06.858 13:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:06.858 13:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:06.858 13:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:06.858 13:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:06.858 13:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.858 13:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.858 13:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.117 13:50:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.117 13:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:07.117 "name": "raid_bdev1", 00:30:07.117 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:07.117 "strip_size_kb": 64, 00:30:07.117 "state": "online", 00:30:07.117 
"raid_level": "raid5f", 00:30:07.117 "superblock": true, 00:30:07.117 "num_base_bdevs": 3, 00:30:07.117 "num_base_bdevs_discovered": 3, 00:30:07.117 "num_base_bdevs_operational": 3, 00:30:07.117 "process": { 00:30:07.117 "type": "rebuild", 00:30:07.117 "target": "spare", 00:30:07.117 "progress": { 00:30:07.117 "blocks": 47104, 00:30:07.117 "percent": 37 00:30:07.117 } 00:30:07.117 }, 00:30:07.117 "base_bdevs_list": [ 00:30:07.117 { 00:30:07.117 "name": "spare", 00:30:07.117 "uuid": "788c890c-d052-56e3-8548-0f0e9cdfe2cb", 00:30:07.117 "is_configured": true, 00:30:07.117 "data_offset": 2048, 00:30:07.117 "data_size": 63488 00:30:07.117 }, 00:30:07.117 { 00:30:07.117 "name": "BaseBdev2", 00:30:07.117 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:07.117 "is_configured": true, 00:30:07.117 "data_offset": 2048, 00:30:07.117 "data_size": 63488 00:30:07.118 }, 00:30:07.118 { 00:30:07.118 "name": "BaseBdev3", 00:30:07.118 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:07.118 "is_configured": true, 00:30:07.118 "data_offset": 2048, 00:30:07.118 "data_size": 63488 00:30:07.118 } 00:30:07.118 ] 00:30:07.118 }' 00:30:07.118 13:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:07.118 13:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:07.118 13:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:07.118 13:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:07.118 13:50:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:08.055 13:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:08.055 13:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:08.055 13:50:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:08.055 13:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:08.055 13:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:08.055 13:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:08.055 13:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:08.055 13:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:08.055 13:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.055 13:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:08.055 13:50:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.315 13:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:08.315 "name": "raid_bdev1", 00:30:08.315 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:08.315 "strip_size_kb": 64, 00:30:08.315 "state": "online", 00:30:08.315 "raid_level": "raid5f", 00:30:08.315 "superblock": true, 00:30:08.315 "num_base_bdevs": 3, 00:30:08.315 "num_base_bdevs_discovered": 3, 00:30:08.315 "num_base_bdevs_operational": 3, 00:30:08.315 "process": { 00:30:08.315 "type": "rebuild", 00:30:08.315 "target": "spare", 00:30:08.315 "progress": { 00:30:08.315 "blocks": 69632, 00:30:08.315 "percent": 54 00:30:08.315 } 00:30:08.315 }, 00:30:08.315 "base_bdevs_list": [ 00:30:08.315 { 00:30:08.315 "name": "spare", 00:30:08.315 "uuid": "788c890c-d052-56e3-8548-0f0e9cdfe2cb", 00:30:08.315 "is_configured": true, 00:30:08.315 "data_offset": 2048, 00:30:08.315 "data_size": 63488 00:30:08.315 }, 00:30:08.315 { 00:30:08.315 "name": "BaseBdev2", 00:30:08.315 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:08.315 
"is_configured": true, 00:30:08.315 "data_offset": 2048, 00:30:08.315 "data_size": 63488 00:30:08.315 }, 00:30:08.315 { 00:30:08.315 "name": "BaseBdev3", 00:30:08.315 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:08.315 "is_configured": true, 00:30:08.315 "data_offset": 2048, 00:30:08.315 "data_size": 63488 00:30:08.315 } 00:30:08.315 ] 00:30:08.315 }' 00:30:08.315 13:50:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:08.315 13:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:08.315 13:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:08.315 13:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:08.315 13:50:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:09.251 13:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:09.251 13:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:09.251 13:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:09.251 13:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:09.251 13:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:09.251 13:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:09.251 13:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:09.251 13:50:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.251 13:50:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.251 13:50:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.251 13:50:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.251 13:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:09.251 "name": "raid_bdev1", 00:30:09.251 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:09.251 "strip_size_kb": 64, 00:30:09.251 "state": "online", 00:30:09.251 "raid_level": "raid5f", 00:30:09.251 "superblock": true, 00:30:09.251 "num_base_bdevs": 3, 00:30:09.251 "num_base_bdevs_discovered": 3, 00:30:09.251 "num_base_bdevs_operational": 3, 00:30:09.252 "process": { 00:30:09.252 "type": "rebuild", 00:30:09.252 "target": "spare", 00:30:09.252 "progress": { 00:30:09.252 "blocks": 94208, 00:30:09.252 "percent": 74 00:30:09.252 } 00:30:09.252 }, 00:30:09.252 "base_bdevs_list": [ 00:30:09.252 { 00:30:09.252 "name": "spare", 00:30:09.252 "uuid": "788c890c-d052-56e3-8548-0f0e9cdfe2cb", 00:30:09.252 "is_configured": true, 00:30:09.252 "data_offset": 2048, 00:30:09.252 "data_size": 63488 00:30:09.252 }, 00:30:09.252 { 00:30:09.252 "name": "BaseBdev2", 00:30:09.252 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:09.252 "is_configured": true, 00:30:09.252 "data_offset": 2048, 00:30:09.252 "data_size": 63488 00:30:09.252 }, 00:30:09.252 { 00:30:09.252 "name": "BaseBdev3", 00:30:09.252 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:09.252 "is_configured": true, 00:30:09.252 "data_offset": 2048, 00:30:09.252 "data_size": 63488 00:30:09.252 } 00:30:09.252 ] 00:30:09.252 }' 00:30:09.252 13:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:09.510 13:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:09.510 13:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:09.510 13:50:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:09.510 13:50:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:10.444 13:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:10.444 13:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:10.444 13:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:10.444 13:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:10.444 13:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:10.444 13:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:10.444 13:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:10.444 13:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.444 13:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:10.444 13:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.444 13:50:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.444 13:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:10.444 "name": "raid_bdev1", 00:30:10.444 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:10.444 "strip_size_kb": 64, 00:30:10.444 "state": "online", 00:30:10.444 "raid_level": "raid5f", 00:30:10.444 "superblock": true, 00:30:10.444 "num_base_bdevs": 3, 00:30:10.444 "num_base_bdevs_discovered": 3, 00:30:10.444 "num_base_bdevs_operational": 3, 00:30:10.444 "process": { 00:30:10.444 "type": "rebuild", 00:30:10.444 "target": "spare", 00:30:10.444 "progress": { 00:30:10.444 "blocks": 116736, 
00:30:10.444 "percent": 91 00:30:10.444 } 00:30:10.444 }, 00:30:10.444 "base_bdevs_list": [ 00:30:10.444 { 00:30:10.444 "name": "spare", 00:30:10.444 "uuid": "788c890c-d052-56e3-8548-0f0e9cdfe2cb", 00:30:10.444 "is_configured": true, 00:30:10.444 "data_offset": 2048, 00:30:10.444 "data_size": 63488 00:30:10.444 }, 00:30:10.444 { 00:30:10.444 "name": "BaseBdev2", 00:30:10.444 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:10.444 "is_configured": true, 00:30:10.444 "data_offset": 2048, 00:30:10.444 "data_size": 63488 00:30:10.444 }, 00:30:10.444 { 00:30:10.444 "name": "BaseBdev3", 00:30:10.444 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:10.444 "is_configured": true, 00:30:10.444 "data_offset": 2048, 00:30:10.444 "data_size": 63488 00:30:10.444 } 00:30:10.444 ] 00:30:10.444 }' 00:30:10.444 13:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:10.703 13:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:10.703 13:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:10.703 13:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:10.703 13:50:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:30:10.961 [2024-11-20 13:50:13.712777] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:10.961 [2024-11-20 13:50:13.712907] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:10.961 [2024-11-20 13:50:13.713122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:11.896 
13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:11.896 "name": "raid_bdev1", 00:30:11.896 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:11.896 "strip_size_kb": 64, 00:30:11.896 "state": "online", 00:30:11.896 "raid_level": "raid5f", 00:30:11.896 "superblock": true, 00:30:11.896 "num_base_bdevs": 3, 00:30:11.896 "num_base_bdevs_discovered": 3, 00:30:11.896 "num_base_bdevs_operational": 3, 00:30:11.896 "base_bdevs_list": [ 00:30:11.896 { 00:30:11.896 "name": "spare", 00:30:11.896 "uuid": "788c890c-d052-56e3-8548-0f0e9cdfe2cb", 00:30:11.896 "is_configured": true, 00:30:11.896 "data_offset": 2048, 00:30:11.896 "data_size": 63488 00:30:11.896 }, 00:30:11.896 { 00:30:11.896 "name": "BaseBdev2", 00:30:11.896 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:11.896 "is_configured": true, 00:30:11.896 "data_offset": 2048, 00:30:11.896 "data_size": 63488 00:30:11.896 }, 00:30:11.896 { 00:30:11.896 "name": "BaseBdev3", 00:30:11.896 
"uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:11.896 "is_configured": true, 00:30:11.896 "data_offset": 2048, 00:30:11.896 "data_size": 63488 00:30:11.896 } 00:30:11.896 ] 00:30:11.896 }' 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:11.896 "name": 
"raid_bdev1", 00:30:11.896 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:11.896 "strip_size_kb": 64, 00:30:11.896 "state": "online", 00:30:11.896 "raid_level": "raid5f", 00:30:11.896 "superblock": true, 00:30:11.896 "num_base_bdevs": 3, 00:30:11.896 "num_base_bdevs_discovered": 3, 00:30:11.896 "num_base_bdevs_operational": 3, 00:30:11.896 "base_bdevs_list": [ 00:30:11.896 { 00:30:11.896 "name": "spare", 00:30:11.896 "uuid": "788c890c-d052-56e3-8548-0f0e9cdfe2cb", 00:30:11.896 "is_configured": true, 00:30:11.896 "data_offset": 2048, 00:30:11.896 "data_size": 63488 00:30:11.896 }, 00:30:11.896 { 00:30:11.896 "name": "BaseBdev2", 00:30:11.896 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:11.896 "is_configured": true, 00:30:11.896 "data_offset": 2048, 00:30:11.896 "data_size": 63488 00:30:11.896 }, 00:30:11.896 { 00:30:11.896 "name": "BaseBdev3", 00:30:11.896 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:11.896 "is_configured": true, 00:30:11.896 "data_offset": 2048, 00:30:11.896 "data_size": 63488 00:30:11.896 } 00:30:11.896 ] 00:30:11.896 }' 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.896 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.155 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:12.155 "name": "raid_bdev1", 00:30:12.155 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:12.155 "strip_size_kb": 64, 00:30:12.155 "state": "online", 00:30:12.155 "raid_level": "raid5f", 00:30:12.155 "superblock": true, 00:30:12.155 "num_base_bdevs": 3, 00:30:12.155 "num_base_bdevs_discovered": 3, 00:30:12.155 "num_base_bdevs_operational": 3, 00:30:12.155 "base_bdevs_list": [ 00:30:12.155 { 00:30:12.155 "name": "spare", 00:30:12.155 "uuid": "788c890c-d052-56e3-8548-0f0e9cdfe2cb", 00:30:12.155 "is_configured": true, 00:30:12.155 "data_offset": 2048, 00:30:12.155 "data_size": 63488 00:30:12.155 }, 00:30:12.155 { 00:30:12.155 "name": "BaseBdev2", 
00:30:12.155 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:12.155 "is_configured": true, 00:30:12.155 "data_offset": 2048, 00:30:12.155 "data_size": 63488 00:30:12.155 }, 00:30:12.155 { 00:30:12.155 "name": "BaseBdev3", 00:30:12.155 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:12.155 "is_configured": true, 00:30:12.155 "data_offset": 2048, 00:30:12.155 "data_size": 63488 00:30:12.155 } 00:30:12.155 ] 00:30:12.155 }' 00:30:12.155 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:12.155 13:50:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.413 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:12.413 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.413 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.413 [2024-11-20 13:50:15.325381] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:12.413 [2024-11-20 13:50:15.325421] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:12.413 [2024-11-20 13:50:15.325531] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:12.413 [2024-11-20 13:50:15.325637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:12.413 [2024-11-20 13:50:15.325672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:12.672 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.672 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:12.672 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.672 13:50:15 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.672 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:30:12.672 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.672 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:30:12.672 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:30:12.672 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:30:12.672 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:12.672 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:30:12.672 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:30:12.672 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:12.672 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:12.672 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:12.672 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:12.672 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:12.672 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:12.672 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:12.931 /dev/nbd0 00:30:12.931 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:12.931 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:30:12.931 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:30:12.931 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:30:12.931 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:12.931 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:12.931 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:30:12.931 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:30:12.931 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:12.931 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:12.931 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:12.931 1+0 records in 00:30:12.931 1+0 records out 00:30:12.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291075 s, 14.1 MB/s 00:30:12.931 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:12.931 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:30:12.931 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:12.931 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:12.931 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:30:12.931 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:12.931 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i 
< 2 )) 00:30:12.931 13:50:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:30:13.189 /dev/nbd1 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:13.447 1+0 records in 00:30:13.447 1+0 records out 00:30:13.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453089 s, 9.0 MB/s 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:13.447 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:30:14.015 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:14.015 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:14.015 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:14.015 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:14.015 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:14.015 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:30:14.015 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:14.015 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:14.015 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:14.015 13:50:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:30:14.272 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:14.272 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:14.273 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:14.273 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:14.273 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:14.273 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:14.273 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:14.273 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:14.273 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:30:14.273 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:30:14.273 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.273 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.273 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.273 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:14.273 13:50:17 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.273 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.273 [2024-11-20 13:50:17.118346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:14.273 [2024-11-20 13:50:17.118458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:14.273 [2024-11-20 13:50:17.118490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:30:14.273 [2024-11-20 13:50:17.118513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:14.273 [2024-11-20 13:50:17.121680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:14.273 [2024-11-20 13:50:17.121759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:14.273 [2024-11-20 13:50:17.121892] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:14.273 [2024-11-20 13:50:17.122010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:14.273 [2024-11-20 13:50:17.122198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:14.273 [2024-11-20 13:50:17.122383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:14.273 spare 00:30:14.273 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.273 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:30:14.273 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.273 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.531 [2024-11-20 13:50:17.222592] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 
00:30:14.531 [2024-11-20 13:50:17.222684] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:30:14.531 [2024-11-20 13:50:17.223118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:30:14.531 [2024-11-20 13:50:17.227831] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:30:14.531 [2024-11-20 13:50:17.227861] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:30:14.531 [2024-11-20 13:50:17.228184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:14.531 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.531 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:14.531 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:14.531 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:14.531 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:14.531 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:14.531 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:14.531 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:14.531 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:14.531 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:14.531 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:14.531 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:30:14.531 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.531 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.531 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:14.531 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.531 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:14.531 "name": "raid_bdev1", 00:30:14.531 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:14.531 "strip_size_kb": 64, 00:30:14.531 "state": "online", 00:30:14.531 "raid_level": "raid5f", 00:30:14.531 "superblock": true, 00:30:14.531 "num_base_bdevs": 3, 00:30:14.531 "num_base_bdevs_discovered": 3, 00:30:14.531 "num_base_bdevs_operational": 3, 00:30:14.531 "base_bdevs_list": [ 00:30:14.531 { 00:30:14.531 "name": "spare", 00:30:14.531 "uuid": "788c890c-d052-56e3-8548-0f0e9cdfe2cb", 00:30:14.531 "is_configured": true, 00:30:14.531 "data_offset": 2048, 00:30:14.531 "data_size": 63488 00:30:14.531 }, 00:30:14.531 { 00:30:14.531 "name": "BaseBdev2", 00:30:14.531 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:14.531 "is_configured": true, 00:30:14.531 "data_offset": 2048, 00:30:14.531 "data_size": 63488 00:30:14.531 }, 00:30:14.531 { 00:30:14.531 "name": "BaseBdev3", 00:30:14.531 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:14.531 "is_configured": true, 00:30:14.531 "data_offset": 2048, 00:30:14.531 "data_size": 63488 00:30:14.531 } 00:30:14.531 ] 00:30:14.531 }' 00:30:14.531 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:14.531 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.098 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:15.099 13:50:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:15.099 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:15.099 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:15.099 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:15.099 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:15.099 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.099 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.099 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:15.099 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.099 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:15.099 "name": "raid_bdev1", 00:30:15.099 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:15.099 "strip_size_kb": 64, 00:30:15.099 "state": "online", 00:30:15.099 "raid_level": "raid5f", 00:30:15.099 "superblock": true, 00:30:15.099 "num_base_bdevs": 3, 00:30:15.099 "num_base_bdevs_discovered": 3, 00:30:15.099 "num_base_bdevs_operational": 3, 00:30:15.099 "base_bdevs_list": [ 00:30:15.099 { 00:30:15.099 "name": "spare", 00:30:15.099 "uuid": "788c890c-d052-56e3-8548-0f0e9cdfe2cb", 00:30:15.099 "is_configured": true, 00:30:15.099 "data_offset": 2048, 00:30:15.099 "data_size": 63488 00:30:15.099 }, 00:30:15.099 { 00:30:15.099 "name": "BaseBdev2", 00:30:15.099 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:15.099 "is_configured": true, 00:30:15.099 "data_offset": 2048, 00:30:15.099 "data_size": 63488 00:30:15.099 }, 00:30:15.099 { 00:30:15.099 "name": "BaseBdev3", 00:30:15.099 "uuid": 
"7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:15.099 "is_configured": true, 00:30:15.099 "data_offset": 2048, 00:30:15.099 "data_size": 63488 00:30:15.099 } 00:30:15.099 ] 00:30:15.099 }' 00:30:15.099 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:15.099 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:15.099 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:15.099 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:15.099 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:15.099 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:15.099 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.099 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.099 13:50:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.358 [2024-11-20 13:50:18.021701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:30:15.358 
13:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:15.358 "name": "raid_bdev1", 00:30:15.358 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:15.358 "strip_size_kb": 64, 00:30:15.358 "state": "online", 00:30:15.358 "raid_level": "raid5f", 00:30:15.358 "superblock": true, 00:30:15.358 "num_base_bdevs": 3, 00:30:15.358 "num_base_bdevs_discovered": 2, 00:30:15.358 "num_base_bdevs_operational": 2, 
00:30:15.358 "base_bdevs_list": [ 00:30:15.358 { 00:30:15.358 "name": null, 00:30:15.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:15.358 "is_configured": false, 00:30:15.358 "data_offset": 0, 00:30:15.358 "data_size": 63488 00:30:15.358 }, 00:30:15.358 { 00:30:15.358 "name": "BaseBdev2", 00:30:15.358 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:15.358 "is_configured": true, 00:30:15.358 "data_offset": 2048, 00:30:15.358 "data_size": 63488 00:30:15.358 }, 00:30:15.358 { 00:30:15.358 "name": "BaseBdev3", 00:30:15.358 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:15.358 "is_configured": true, 00:30:15.358 "data_offset": 2048, 00:30:15.358 "data_size": 63488 00:30:15.358 } 00:30:15.358 ] 00:30:15.358 }' 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:15.358 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.924 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:30:15.924 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.924 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.924 [2024-11-20 13:50:18.561875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:15.924 [2024-11-20 13:50:18.562172] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:15.924 [2024-11-20 13:50:18.562231] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:15.924 [2024-11-20 13:50:18.562303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:15.924 [2024-11-20 13:50:18.576080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:30:15.924 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.924 13:50:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:30:15.924 [2024-11-20 13:50:18.582774] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:16.858 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:16.858 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:16.858 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:16.858 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:16.858 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:16.858 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:16.858 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.858 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.858 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:16.858 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.858 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:16.858 "name": "raid_bdev1", 00:30:16.858 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:16.858 "strip_size_kb": 64, 00:30:16.858 "state": "online", 00:30:16.858 
"raid_level": "raid5f", 00:30:16.858 "superblock": true, 00:30:16.858 "num_base_bdevs": 3, 00:30:16.858 "num_base_bdevs_discovered": 3, 00:30:16.858 "num_base_bdevs_operational": 3, 00:30:16.858 "process": { 00:30:16.858 "type": "rebuild", 00:30:16.858 "target": "spare", 00:30:16.858 "progress": { 00:30:16.858 "blocks": 18432, 00:30:16.858 "percent": 14 00:30:16.858 } 00:30:16.858 }, 00:30:16.858 "base_bdevs_list": [ 00:30:16.858 { 00:30:16.858 "name": "spare", 00:30:16.858 "uuid": "788c890c-d052-56e3-8548-0f0e9cdfe2cb", 00:30:16.858 "is_configured": true, 00:30:16.858 "data_offset": 2048, 00:30:16.858 "data_size": 63488 00:30:16.858 }, 00:30:16.858 { 00:30:16.858 "name": "BaseBdev2", 00:30:16.858 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:16.858 "is_configured": true, 00:30:16.858 "data_offset": 2048, 00:30:16.858 "data_size": 63488 00:30:16.858 }, 00:30:16.858 { 00:30:16.858 "name": "BaseBdev3", 00:30:16.858 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:16.858 "is_configured": true, 00:30:16.858 "data_offset": 2048, 00:30:16.858 "data_size": 63488 00:30:16.858 } 00:30:16.858 ] 00:30:16.858 }' 00:30:16.858 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:16.858 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:16.858 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:16.858 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:16.858 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:30:16.858 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.858 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.858 [2024-11-20 13:50:19.740763] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:17.116 [2024-11-20 13:50:19.799596] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:17.116 [2024-11-20 13:50:19.799718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:17.116 [2024-11-20 13:50:19.799744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:17.116 [2024-11-20 13:50:19.799759] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:17.116 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.116 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:30:17.116 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:17.116 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:17.116 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:17.116 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:17.116 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:17.116 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:17.116 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:17.116 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:17.116 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:17.116 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:17.116 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.116 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:17.116 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:17.116 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.116 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:17.116 "name": "raid_bdev1", 00:30:17.116 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:17.116 "strip_size_kb": 64, 00:30:17.116 "state": "online", 00:30:17.116 "raid_level": "raid5f", 00:30:17.116 "superblock": true, 00:30:17.116 "num_base_bdevs": 3, 00:30:17.116 "num_base_bdevs_discovered": 2, 00:30:17.116 "num_base_bdevs_operational": 2, 00:30:17.116 "base_bdevs_list": [ 00:30:17.116 { 00:30:17.116 "name": null, 00:30:17.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:17.116 "is_configured": false, 00:30:17.116 "data_offset": 0, 00:30:17.116 "data_size": 63488 00:30:17.116 }, 00:30:17.116 { 00:30:17.116 "name": "BaseBdev2", 00:30:17.116 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:17.116 "is_configured": true, 00:30:17.116 "data_offset": 2048, 00:30:17.116 "data_size": 63488 00:30:17.116 }, 00:30:17.116 { 00:30:17.116 "name": "BaseBdev3", 00:30:17.116 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:17.116 "is_configured": true, 00:30:17.116 "data_offset": 2048, 00:30:17.116 "data_size": 63488 00:30:17.116 } 00:30:17.116 ] 00:30:17.116 }' 00:30:17.116 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:17.116 13:50:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:17.683 13:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:30:17.683 13:50:20 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.683 13:50:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:17.683 [2024-11-20 13:50:20.356766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:17.683 [2024-11-20 13:50:20.356852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:17.683 [2024-11-20 13:50:20.356884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:30:17.683 [2024-11-20 13:50:20.356923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:17.683 [2024-11-20 13:50:20.357586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:17.683 [2024-11-20 13:50:20.357662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:17.683 [2024-11-20 13:50:20.357781] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:17.683 [2024-11-20 13:50:20.357806] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:30:17.683 [2024-11-20 13:50:20.357820] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:17.683 [2024-11-20 13:50:20.357904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:17.683 [2024-11-20 13:50:20.373340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:30:17.683 spare 00:30:17.683 13:50:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.683 13:50:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:30:17.683 [2024-11-20 13:50:20.380844] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:18.618 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:18.618 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:18.618 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:30:18.618 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:30:18.618 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:18.618 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:18.618 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:18.618 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.618 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:18.618 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.618 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:18.618 "name": "raid_bdev1", 00:30:18.618 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:18.618 "strip_size_kb": 64, 00:30:18.618 "state": 
"online", 00:30:18.618 "raid_level": "raid5f", 00:30:18.618 "superblock": true, 00:30:18.618 "num_base_bdevs": 3, 00:30:18.618 "num_base_bdevs_discovered": 3, 00:30:18.618 "num_base_bdevs_operational": 3, 00:30:18.618 "process": { 00:30:18.618 "type": "rebuild", 00:30:18.618 "target": "spare", 00:30:18.618 "progress": { 00:30:18.618 "blocks": 18432, 00:30:18.618 "percent": 14 00:30:18.618 } 00:30:18.618 }, 00:30:18.618 "base_bdevs_list": [ 00:30:18.618 { 00:30:18.618 "name": "spare", 00:30:18.618 "uuid": "788c890c-d052-56e3-8548-0f0e9cdfe2cb", 00:30:18.618 "is_configured": true, 00:30:18.618 "data_offset": 2048, 00:30:18.618 "data_size": 63488 00:30:18.618 }, 00:30:18.618 { 00:30:18.618 "name": "BaseBdev2", 00:30:18.618 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:18.618 "is_configured": true, 00:30:18.618 "data_offset": 2048, 00:30:18.618 "data_size": 63488 00:30:18.618 }, 00:30:18.618 { 00:30:18.618 "name": "BaseBdev3", 00:30:18.618 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:18.618 "is_configured": true, 00:30:18.618 "data_offset": 2048, 00:30:18.618 "data_size": 63488 00:30:18.618 } 00:30:18.618 ] 00:30:18.618 }' 00:30:18.618 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:18.618 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:18.618 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:18.877 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:30:18.877 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:30:18.877 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.877 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:18.877 [2024-11-20 13:50:21.543492] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:18.877 [2024-11-20 13:50:21.596849] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:18.877 [2024-11-20 13:50:21.597002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:18.877 [2024-11-20 13:50:21.597031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:18.877 [2024-11-20 13:50:21.597043] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:18.877 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.877 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:30:18.877 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:18.877 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:18.877 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:18.877 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:18.877 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:18.877 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:18.877 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:18.877 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:18.878 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:18.878 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:18.878 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:18.878 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.878 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:18.878 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.878 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:18.878 "name": "raid_bdev1", 00:30:18.878 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:18.878 "strip_size_kb": 64, 00:30:18.878 "state": "online", 00:30:18.878 "raid_level": "raid5f", 00:30:18.878 "superblock": true, 00:30:18.878 "num_base_bdevs": 3, 00:30:18.878 "num_base_bdevs_discovered": 2, 00:30:18.878 "num_base_bdevs_operational": 2, 00:30:18.878 "base_bdevs_list": [ 00:30:18.878 { 00:30:18.878 "name": null, 00:30:18.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:18.878 "is_configured": false, 00:30:18.878 "data_offset": 0, 00:30:18.878 "data_size": 63488 00:30:18.878 }, 00:30:18.878 { 00:30:18.878 "name": "BaseBdev2", 00:30:18.878 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:18.878 "is_configured": true, 00:30:18.878 "data_offset": 2048, 00:30:18.878 "data_size": 63488 00:30:18.878 }, 00:30:18.878 { 00:30:18.878 "name": "BaseBdev3", 00:30:18.878 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:18.878 "is_configured": true, 00:30:18.878 "data_offset": 2048, 00:30:18.878 "data_size": 63488 00:30:18.878 } 00:30:18.878 ] 00:30:18.878 }' 00:30:18.878 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:18.878 13:50:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:19.453 "name": "raid_bdev1", 00:30:19.453 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:19.453 "strip_size_kb": 64, 00:30:19.453 "state": "online", 00:30:19.453 "raid_level": "raid5f", 00:30:19.453 "superblock": true, 00:30:19.453 "num_base_bdevs": 3, 00:30:19.453 "num_base_bdevs_discovered": 2, 00:30:19.453 "num_base_bdevs_operational": 2, 00:30:19.453 "base_bdevs_list": [ 00:30:19.453 { 00:30:19.453 "name": null, 00:30:19.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:19.453 "is_configured": false, 00:30:19.453 "data_offset": 0, 00:30:19.453 "data_size": 63488 00:30:19.453 }, 00:30:19.453 { 00:30:19.453 "name": "BaseBdev2", 00:30:19.453 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:19.453 "is_configured": true, 00:30:19.453 "data_offset": 2048, 00:30:19.453 "data_size": 63488 00:30:19.453 }, 00:30:19.453 { 00:30:19.453 "name": "BaseBdev3", 00:30:19.453 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:19.453 "is_configured": true, 
00:30:19.453 "data_offset": 2048, 00:30:19.453 "data_size": 63488 00:30:19.453 } 00:30:19.453 ] 00:30:19.453 }' 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:19.453 [2024-11-20 13:50:22.338631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:19.453 [2024-11-20 13:50:22.338697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:19.453 [2024-11-20 13:50:22.338734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:30:19.453 [2024-11-20 13:50:22.338749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:19.453 [2024-11-20 13:50:22.339389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:19.453 [2024-11-20 
13:50:22.339440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:19.453 [2024-11-20 13:50:22.339548] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:19.453 [2024-11-20 13:50:22.339569] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:19.453 [2024-11-20 13:50:22.339597] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:19.453 [2024-11-20 13:50:22.339611] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:30:19.453 BaseBdev1 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.453 13:50:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:30:20.841 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:30:20.841 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:20.841 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:20.841 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:20.841 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:20.841 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:20.841 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:20.841 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:20.841 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:20.841 13:50:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:20.841 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:20.841 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:20.841 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.841 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:20.841 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.841 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:20.841 "name": "raid_bdev1", 00:30:20.841 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:20.841 "strip_size_kb": 64, 00:30:20.841 "state": "online", 00:30:20.841 "raid_level": "raid5f", 00:30:20.841 "superblock": true, 00:30:20.841 "num_base_bdevs": 3, 00:30:20.841 "num_base_bdevs_discovered": 2, 00:30:20.841 "num_base_bdevs_operational": 2, 00:30:20.841 "base_bdevs_list": [ 00:30:20.841 { 00:30:20.841 "name": null, 00:30:20.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:20.841 "is_configured": false, 00:30:20.841 "data_offset": 0, 00:30:20.841 "data_size": 63488 00:30:20.841 }, 00:30:20.841 { 00:30:20.841 "name": "BaseBdev2", 00:30:20.841 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:20.841 "is_configured": true, 00:30:20.841 "data_offset": 2048, 00:30:20.841 "data_size": 63488 00:30:20.841 }, 00:30:20.841 { 00:30:20.841 "name": "BaseBdev3", 00:30:20.841 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:20.841 "is_configured": true, 00:30:20.841 "data_offset": 2048, 00:30:20.841 "data_size": 63488 00:30:20.841 } 00:30:20.841 ] 00:30:20.841 }' 00:30:20.841 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:20.841 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:30:21.100 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:21.100 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:21.100 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:21.100 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:21.100 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:21.100 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:21.100 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:21.100 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.100 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:21.100 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.100 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:21.100 "name": "raid_bdev1", 00:30:21.100 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:21.100 "strip_size_kb": 64, 00:30:21.100 "state": "online", 00:30:21.100 "raid_level": "raid5f", 00:30:21.100 "superblock": true, 00:30:21.100 "num_base_bdevs": 3, 00:30:21.100 "num_base_bdevs_discovered": 2, 00:30:21.100 "num_base_bdevs_operational": 2, 00:30:21.100 "base_bdevs_list": [ 00:30:21.100 { 00:30:21.100 "name": null, 00:30:21.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:21.100 "is_configured": false, 00:30:21.100 "data_offset": 0, 00:30:21.100 "data_size": 63488 00:30:21.100 }, 00:30:21.100 { 00:30:21.100 "name": "BaseBdev2", 00:30:21.100 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 
00:30:21.100 "is_configured": true, 00:30:21.100 "data_offset": 2048, 00:30:21.100 "data_size": 63488 00:30:21.100 }, 00:30:21.100 { 00:30:21.100 "name": "BaseBdev3", 00:30:21.100 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:21.100 "is_configured": true, 00:30:21.100 "data_offset": 2048, 00:30:21.100 "data_size": 63488 00:30:21.100 } 00:30:21.100 ] 00:30:21.100 }' 00:30:21.100 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:21.100 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:21.100 13:50:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:21.359 13:50:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:21.359 13:50:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:21.359 13:50:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:30:21.359 13:50:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:21.359 13:50:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:21.359 13:50:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.359 13:50:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:21.359 13:50:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.359 13:50:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:21.359 13:50:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.359 13:50:24 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:21.359 [2024-11-20 13:50:24.051495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:21.359 [2024-11-20 13:50:24.051734] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:30:21.359 [2024-11-20 13:50:24.051758] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:21.359 request: 00:30:21.359 { 00:30:21.359 "base_bdev": "BaseBdev1", 00:30:21.359 "raid_bdev": "raid_bdev1", 00:30:21.359 "method": "bdev_raid_add_base_bdev", 00:30:21.359 "req_id": 1 00:30:21.359 } 00:30:21.359 Got JSON-RPC error response 00:30:21.359 response: 00:30:21.359 { 00:30:21.359 "code": -22, 00:30:21.359 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:21.359 } 00:30:21.359 13:50:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:21.359 13:50:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:30:21.359 13:50:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:21.359 13:50:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:21.359 13:50:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:21.359 13:50:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:30:22.295 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:30:22.295 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:22.295 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:22.295 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:22.295 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:22.295 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:22.295 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:22.295 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:22.295 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:22.295 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:22.295 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:22.295 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:22.295 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.295 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:22.295 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.295 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:22.295 "name": "raid_bdev1", 00:30:22.295 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:22.295 "strip_size_kb": 64, 00:30:22.295 "state": "online", 00:30:22.295 "raid_level": "raid5f", 00:30:22.295 "superblock": true, 00:30:22.295 "num_base_bdevs": 3, 00:30:22.295 "num_base_bdevs_discovered": 2, 00:30:22.295 "num_base_bdevs_operational": 2, 00:30:22.295 "base_bdevs_list": [ 00:30:22.295 { 00:30:22.295 "name": null, 00:30:22.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.295 "is_configured": false, 00:30:22.295 "data_offset": 0, 00:30:22.295 "data_size": 63488 00:30:22.295 }, 00:30:22.295 { 00:30:22.295 
"name": "BaseBdev2", 00:30:22.295 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:22.295 "is_configured": true, 00:30:22.295 "data_offset": 2048, 00:30:22.295 "data_size": 63488 00:30:22.295 }, 00:30:22.295 { 00:30:22.295 "name": "BaseBdev3", 00:30:22.295 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:22.295 "is_configured": true, 00:30:22.295 "data_offset": 2048, 00:30:22.295 "data_size": 63488 00:30:22.295 } 00:30:22.295 ] 00:30:22.295 }' 00:30:22.295 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:22.295 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:30:22.862 "name": "raid_bdev1", 00:30:22.862 "uuid": "93f7d0a6-c22c-4c95-a566-d014aa3f48ef", 00:30:22.862 
"strip_size_kb": 64, 00:30:22.862 "state": "online", 00:30:22.862 "raid_level": "raid5f", 00:30:22.862 "superblock": true, 00:30:22.862 "num_base_bdevs": 3, 00:30:22.862 "num_base_bdevs_discovered": 2, 00:30:22.862 "num_base_bdevs_operational": 2, 00:30:22.862 "base_bdevs_list": [ 00:30:22.862 { 00:30:22.862 "name": null, 00:30:22.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.862 "is_configured": false, 00:30:22.862 "data_offset": 0, 00:30:22.862 "data_size": 63488 00:30:22.862 }, 00:30:22.862 { 00:30:22.862 "name": "BaseBdev2", 00:30:22.862 "uuid": "65891948-1ae6-5a01-8c7d-85a944b07c7d", 00:30:22.862 "is_configured": true, 00:30:22.862 "data_offset": 2048, 00:30:22.862 "data_size": 63488 00:30:22.862 }, 00:30:22.862 { 00:30:22.862 "name": "BaseBdev3", 00:30:22.862 "uuid": "7d6daedf-a1a9-5d14-a56e-9952eda30732", 00:30:22.862 "is_configured": true, 00:30:22.862 "data_offset": 2048, 00:30:22.862 "data_size": 63488 00:30:22.862 } 00:30:22.862 ] 00:30:22.862 }' 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82632 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82632 ']' 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82632 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:30:22.862 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:22.862 13:50:25 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82632 00:30:22.863 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:22.863 killing process with pid 82632 00:30:22.863 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:22.863 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82632' 00:30:22.863 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82632 00:30:22.863 Received shutdown signal, test time was about 60.000000 seconds 00:30:22.863 00:30:22.863 Latency(us) 00:30:22.863 [2024-11-20T13:50:25.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.863 [2024-11-20T13:50:25.780Z] =================================================================================================================== 00:30:22.863 [2024-11-20T13:50:25.780Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:22.863 [2024-11-20 13:50:25.769610] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:22.863 13:50:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82632 00:30:22.863 [2024-11-20 13:50:25.769758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:22.863 [2024-11-20 13:50:25.769838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:22.863 [2024-11-20 13:50:25.769874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:30:23.430 [2024-11-20 13:50:26.099644] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:24.367 ************************************ 00:30:24.367 END TEST raid5f_rebuild_test_sb 00:30:24.367 ************************************ 00:30:24.367 13:50:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:30:24.367 00:30:24.367 real 0m25.623s 00:30:24.367 user 0m34.191s 00:30:24.367 sys 0m2.968s 00:30:24.367 13:50:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:24.367 13:50:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:24.367 13:50:27 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:30:24.367 13:50:27 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:30:24.367 13:50:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:24.367 13:50:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:24.367 13:50:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:24.367 ************************************ 00:30:24.367 START TEST raid5f_state_function_test 00:30:24.367 ************************************ 00:30:24.367 13:50:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:30:24.367 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:30:24.367 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:30:24.367 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:30:24.367 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:30:24.367 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83401 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:30:24.368 Process raid pid: 83401 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83401' 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83401 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83401 ']' 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:24.368 13:50:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.369 13:50:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:24.369 13:50:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.627 [2024-11-20 13:50:27.378964] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:30:24.628 [2024-11-20 13:50:27.379365] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:24.886 [2024-11-20 13:50:27.574730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.886 [2024-11-20 13:50:27.759545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.145 [2024-11-20 13:50:28.015684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:25.145 [2024-11-20 13:50:28.016081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:25.711 13:50:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:25.711 13:50:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:30:25.711 13:50:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:25.711 13:50:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.711 13:50:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.711 [2024-11-20 13:50:28.476580] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:25.711 [2024-11-20 13:50:28.476865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:25.711 [2024-11-20 13:50:28.476909] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:25.712 [2024-11-20 13:50:28.476933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:25.712 [2024-11-20 13:50:28.476951] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:30:25.712 [2024-11-20 13:50:28.476969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:25.712 [2024-11-20 13:50:28.476979] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:25.712 [2024-11-20 13:50:28.476993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:25.712 13:50:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.712 13:50:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:25.712 13:50:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:25.712 13:50:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:25.712 13:50:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:25.712 13:50:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:25.712 13:50:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:25.712 13:50:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:25.712 13:50:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:25.712 13:50:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:25.712 13:50:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:25.712 13:50:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:25.712 13:50:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.712 13:50:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:30:25.712 13:50:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:25.712 13:50:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.712 13:50:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:25.712 "name": "Existed_Raid", 00:30:25.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.712 "strip_size_kb": 64, 00:30:25.712 "state": "configuring", 00:30:25.712 "raid_level": "raid5f", 00:30:25.712 "superblock": false, 00:30:25.712 "num_base_bdevs": 4, 00:30:25.712 "num_base_bdevs_discovered": 0, 00:30:25.712 "num_base_bdevs_operational": 4, 00:30:25.712 "base_bdevs_list": [ 00:30:25.712 { 00:30:25.712 "name": "BaseBdev1", 00:30:25.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.712 "is_configured": false, 00:30:25.712 "data_offset": 0, 00:30:25.712 "data_size": 0 00:30:25.712 }, 00:30:25.712 { 00:30:25.712 "name": "BaseBdev2", 00:30:25.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.712 "is_configured": false, 00:30:25.712 "data_offset": 0, 00:30:25.712 "data_size": 0 00:30:25.712 }, 00:30:25.712 { 00:30:25.712 "name": "BaseBdev3", 00:30:25.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.712 "is_configured": false, 00:30:25.712 "data_offset": 0, 00:30:25.712 "data_size": 0 00:30:25.712 }, 00:30:25.712 { 00:30:25.712 "name": "BaseBdev4", 00:30:25.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.712 "is_configured": false, 00:30:25.712 "data_offset": 0, 00:30:25.712 "data_size": 0 00:30:25.712 } 00:30:25.712 ] 00:30:25.712 }' 00:30:25.712 13:50:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:25.712 13:50:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.279 [2024-11-20 13:50:29.056842] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:26.279 [2024-11-20 13:50:29.057197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.279 [2024-11-20 13:50:29.064841] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:26.279 [2024-11-20 13:50:29.065084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:26.279 [2024-11-20 13:50:29.065236] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:26.279 [2024-11-20 13:50:29.065431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:26.279 [2024-11-20 13:50:29.065454] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:26.279 [2024-11-20 13:50:29.065472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:26.279 [2024-11-20 13:50:29.065482] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:30:26.279 [2024-11-20 13:50:29.065496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.279 [2024-11-20 13:50:29.111855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:26.279 BaseBdev1 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.279 
13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.279 [ 00:30:26.279 { 00:30:26.279 "name": "BaseBdev1", 00:30:26.279 "aliases": [ 00:30:26.279 "f485da67-e465-45e3-9c21-f63e736321bd" 00:30:26.279 ], 00:30:26.279 "product_name": "Malloc disk", 00:30:26.279 "block_size": 512, 00:30:26.279 "num_blocks": 65536, 00:30:26.279 "uuid": "f485da67-e465-45e3-9c21-f63e736321bd", 00:30:26.279 "assigned_rate_limits": { 00:30:26.279 "rw_ios_per_sec": 0, 00:30:26.279 "rw_mbytes_per_sec": 0, 00:30:26.279 "r_mbytes_per_sec": 0, 00:30:26.279 "w_mbytes_per_sec": 0 00:30:26.279 }, 00:30:26.279 "claimed": true, 00:30:26.279 "claim_type": "exclusive_write", 00:30:26.279 "zoned": false, 00:30:26.279 "supported_io_types": { 00:30:26.279 "read": true, 00:30:26.279 "write": true, 00:30:26.279 "unmap": true, 00:30:26.279 "flush": true, 00:30:26.279 "reset": true, 00:30:26.279 "nvme_admin": false, 00:30:26.279 "nvme_io": false, 00:30:26.279 "nvme_io_md": false, 00:30:26.279 "write_zeroes": true, 00:30:26.279 "zcopy": true, 00:30:26.279 "get_zone_info": false, 00:30:26.279 "zone_management": false, 00:30:26.279 "zone_append": false, 00:30:26.279 "compare": false, 00:30:26.279 "compare_and_write": false, 00:30:26.279 "abort": true, 00:30:26.279 "seek_hole": false, 00:30:26.279 "seek_data": false, 00:30:26.279 "copy": true, 00:30:26.279 "nvme_iov_md": false 00:30:26.279 }, 00:30:26.279 "memory_domains": [ 00:30:26.279 { 00:30:26.279 "dma_device_id": "system", 00:30:26.279 "dma_device_type": 1 00:30:26.279 }, 00:30:26.279 { 00:30:26.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:26.279 "dma_device_type": 2 00:30:26.279 } 00:30:26.279 ], 00:30:26.279 "driver_specific": {} 00:30:26.279 } 
00:30:26.279 ] 00:30:26.279 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.280 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:26.280 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:26.280 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:26.280 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:26.280 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:26.280 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:26.280 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:26.280 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:26.280 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:26.280 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:26.280 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:26.280 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:26.280 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:26.280 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.280 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.280 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:26.538 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:26.538 "name": "Existed_Raid", 00:30:26.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.538 "strip_size_kb": 64, 00:30:26.538 "state": "configuring", 00:30:26.538 "raid_level": "raid5f", 00:30:26.538 "superblock": false, 00:30:26.538 "num_base_bdevs": 4, 00:30:26.538 "num_base_bdevs_discovered": 1, 00:30:26.538 "num_base_bdevs_operational": 4, 00:30:26.538 "base_bdevs_list": [ 00:30:26.538 { 00:30:26.538 "name": "BaseBdev1", 00:30:26.538 "uuid": "f485da67-e465-45e3-9c21-f63e736321bd", 00:30:26.538 "is_configured": true, 00:30:26.538 "data_offset": 0, 00:30:26.538 "data_size": 65536 00:30:26.538 }, 00:30:26.538 { 00:30:26.538 "name": "BaseBdev2", 00:30:26.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.538 "is_configured": false, 00:30:26.538 "data_offset": 0, 00:30:26.538 "data_size": 0 00:30:26.538 }, 00:30:26.538 { 00:30:26.538 "name": "BaseBdev3", 00:30:26.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.538 "is_configured": false, 00:30:26.538 "data_offset": 0, 00:30:26.538 "data_size": 0 00:30:26.538 }, 00:30:26.538 { 00:30:26.538 "name": "BaseBdev4", 00:30:26.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.538 "is_configured": false, 00:30:26.538 "data_offset": 0, 00:30:26.538 "data_size": 0 00:30:26.538 } 00:30:26.538 ] 00:30:26.538 }' 00:30:26.538 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:26.538 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.796 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.797 
[2024-11-20 13:50:29.676146] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:26.797 [2024-11-20 13:50:29.676248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.797 [2024-11-20 13:50:29.684176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:26.797 [2024-11-20 13:50:29.687553] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:26.797 [2024-11-20 13:50:29.687795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:26.797 [2024-11-20 13:50:29.687989] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:26.797 [2024-11-20 13:50:29.688036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:26.797 [2024-11-20 13:50:29.688052] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:26.797 [2024-11-20 13:50:29.688071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.797 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.055 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:27.055 "name": "Existed_Raid", 00:30:27.055 "uuid": "00000000-0000-0000-0000-000000000000", 
00:30:27.055 "strip_size_kb": 64, 00:30:27.055 "state": "configuring", 00:30:27.055 "raid_level": "raid5f", 00:30:27.055 "superblock": false, 00:30:27.055 "num_base_bdevs": 4, 00:30:27.055 "num_base_bdevs_discovered": 1, 00:30:27.055 "num_base_bdevs_operational": 4, 00:30:27.055 "base_bdevs_list": [ 00:30:27.055 { 00:30:27.055 "name": "BaseBdev1", 00:30:27.055 "uuid": "f485da67-e465-45e3-9c21-f63e736321bd", 00:30:27.055 "is_configured": true, 00:30:27.055 "data_offset": 0, 00:30:27.055 "data_size": 65536 00:30:27.055 }, 00:30:27.055 { 00:30:27.055 "name": "BaseBdev2", 00:30:27.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.055 "is_configured": false, 00:30:27.055 "data_offset": 0, 00:30:27.055 "data_size": 0 00:30:27.055 }, 00:30:27.055 { 00:30:27.055 "name": "BaseBdev3", 00:30:27.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.056 "is_configured": false, 00:30:27.056 "data_offset": 0, 00:30:27.056 "data_size": 0 00:30:27.056 }, 00:30:27.056 { 00:30:27.056 "name": "BaseBdev4", 00:30:27.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.056 "is_configured": false, 00:30:27.056 "data_offset": 0, 00:30:27.056 "data_size": 0 00:30:27.056 } 00:30:27.056 ] 00:30:27.056 }' 00:30:27.056 13:50:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:27.056 13:50:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.314 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:27.314 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.314 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.573 [2024-11-20 13:50:30.262360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:27.573 BaseBdev2 00:30:27.573 13:50:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.573 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:30:27.573 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:30:27.573 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:27.573 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:27.573 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:27.573 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.574 [ 00:30:27.574 { 00:30:27.574 "name": "BaseBdev2", 00:30:27.574 "aliases": [ 00:30:27.574 "ed08e86a-f321-46b4-8c91-55b3b488fee7" 00:30:27.574 ], 00:30:27.574 "product_name": "Malloc disk", 00:30:27.574 "block_size": 512, 00:30:27.574 "num_blocks": 65536, 00:30:27.574 "uuid": "ed08e86a-f321-46b4-8c91-55b3b488fee7", 00:30:27.574 "assigned_rate_limits": { 00:30:27.574 "rw_ios_per_sec": 0, 00:30:27.574 "rw_mbytes_per_sec": 0, 00:30:27.574 
"r_mbytes_per_sec": 0, 00:30:27.574 "w_mbytes_per_sec": 0 00:30:27.574 }, 00:30:27.574 "claimed": true, 00:30:27.574 "claim_type": "exclusive_write", 00:30:27.574 "zoned": false, 00:30:27.574 "supported_io_types": { 00:30:27.574 "read": true, 00:30:27.574 "write": true, 00:30:27.574 "unmap": true, 00:30:27.574 "flush": true, 00:30:27.574 "reset": true, 00:30:27.574 "nvme_admin": false, 00:30:27.574 "nvme_io": false, 00:30:27.574 "nvme_io_md": false, 00:30:27.574 "write_zeroes": true, 00:30:27.574 "zcopy": true, 00:30:27.574 "get_zone_info": false, 00:30:27.574 "zone_management": false, 00:30:27.574 "zone_append": false, 00:30:27.574 "compare": false, 00:30:27.574 "compare_and_write": false, 00:30:27.574 "abort": true, 00:30:27.574 "seek_hole": false, 00:30:27.574 "seek_data": false, 00:30:27.574 "copy": true, 00:30:27.574 "nvme_iov_md": false 00:30:27.574 }, 00:30:27.574 "memory_domains": [ 00:30:27.574 { 00:30:27.574 "dma_device_id": "system", 00:30:27.574 "dma_device_type": 1 00:30:27.574 }, 00:30:27.574 { 00:30:27.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:27.574 "dma_device_type": 2 00:30:27.574 } 00:30:27.574 ], 00:30:27.574 "driver_specific": {} 00:30:27.574 } 00:30:27.574 ] 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:27.574 "name": "Existed_Raid", 00:30:27.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.574 "strip_size_kb": 64, 00:30:27.574 "state": "configuring", 00:30:27.574 "raid_level": "raid5f", 00:30:27.574 "superblock": false, 00:30:27.574 "num_base_bdevs": 4, 00:30:27.574 "num_base_bdevs_discovered": 2, 00:30:27.574 "num_base_bdevs_operational": 4, 00:30:27.574 "base_bdevs_list": [ 00:30:27.574 { 00:30:27.574 "name": "BaseBdev1", 00:30:27.574 "uuid": 
"f485da67-e465-45e3-9c21-f63e736321bd", 00:30:27.574 "is_configured": true, 00:30:27.574 "data_offset": 0, 00:30:27.574 "data_size": 65536 00:30:27.574 }, 00:30:27.574 { 00:30:27.574 "name": "BaseBdev2", 00:30:27.574 "uuid": "ed08e86a-f321-46b4-8c91-55b3b488fee7", 00:30:27.574 "is_configured": true, 00:30:27.574 "data_offset": 0, 00:30:27.574 "data_size": 65536 00:30:27.574 }, 00:30:27.574 { 00:30:27.574 "name": "BaseBdev3", 00:30:27.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.574 "is_configured": false, 00:30:27.574 "data_offset": 0, 00:30:27.574 "data_size": 0 00:30:27.574 }, 00:30:27.574 { 00:30:27.574 "name": "BaseBdev4", 00:30:27.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.574 "is_configured": false, 00:30:27.574 "data_offset": 0, 00:30:27.574 "data_size": 0 00:30:27.574 } 00:30:27.574 ] 00:30:27.574 }' 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:27.574 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.142 [2024-11-20 13:50:30.875929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:28.142 BaseBdev3 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.142 [ 00:30:28.142 { 00:30:28.142 "name": "BaseBdev3", 00:30:28.142 "aliases": [ 00:30:28.142 "24e19e24-2195-487f-890e-b9d478428093" 00:30:28.142 ], 00:30:28.142 "product_name": "Malloc disk", 00:30:28.142 "block_size": 512, 00:30:28.142 "num_blocks": 65536, 00:30:28.142 "uuid": "24e19e24-2195-487f-890e-b9d478428093", 00:30:28.142 "assigned_rate_limits": { 00:30:28.142 "rw_ios_per_sec": 0, 00:30:28.142 "rw_mbytes_per_sec": 0, 00:30:28.142 "r_mbytes_per_sec": 0, 00:30:28.142 "w_mbytes_per_sec": 0 00:30:28.142 }, 00:30:28.142 "claimed": true, 00:30:28.142 "claim_type": "exclusive_write", 00:30:28.142 "zoned": false, 00:30:28.142 "supported_io_types": { 00:30:28.142 "read": true, 00:30:28.142 "write": true, 00:30:28.142 "unmap": true, 00:30:28.142 "flush": true, 00:30:28.142 "reset": true, 00:30:28.142 "nvme_admin": false, 
00:30:28.142 "nvme_io": false, 00:30:28.142 "nvme_io_md": false, 00:30:28.142 "write_zeroes": true, 00:30:28.142 "zcopy": true, 00:30:28.142 "get_zone_info": false, 00:30:28.142 "zone_management": false, 00:30:28.142 "zone_append": false, 00:30:28.142 "compare": false, 00:30:28.142 "compare_and_write": false, 00:30:28.142 "abort": true, 00:30:28.142 "seek_hole": false, 00:30:28.142 "seek_data": false, 00:30:28.142 "copy": true, 00:30:28.142 "nvme_iov_md": false 00:30:28.142 }, 00:30:28.142 "memory_domains": [ 00:30:28.142 { 00:30:28.142 "dma_device_id": "system", 00:30:28.142 "dma_device_type": 1 00:30:28.142 }, 00:30:28.142 { 00:30:28.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:28.142 "dma_device_type": 2 00:30:28.142 } 00:30:28.142 ], 00:30:28.142 "driver_specific": {} 00:30:28.142 } 00:30:28.142 ] 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:28.142 "name": "Existed_Raid", 00:30:28.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:28.142 "strip_size_kb": 64, 00:30:28.142 "state": "configuring", 00:30:28.142 "raid_level": "raid5f", 00:30:28.142 "superblock": false, 00:30:28.142 "num_base_bdevs": 4, 00:30:28.142 "num_base_bdevs_discovered": 3, 00:30:28.142 "num_base_bdevs_operational": 4, 00:30:28.142 "base_bdevs_list": [ 00:30:28.142 { 00:30:28.142 "name": "BaseBdev1", 00:30:28.142 "uuid": "f485da67-e465-45e3-9c21-f63e736321bd", 00:30:28.142 "is_configured": true, 00:30:28.142 "data_offset": 0, 00:30:28.142 "data_size": 65536 00:30:28.142 }, 00:30:28.142 { 00:30:28.142 "name": "BaseBdev2", 00:30:28.142 "uuid": "ed08e86a-f321-46b4-8c91-55b3b488fee7", 00:30:28.142 "is_configured": true, 00:30:28.142 "data_offset": 0, 00:30:28.142 "data_size": 65536 00:30:28.142 }, 00:30:28.142 { 
00:30:28.142 "name": "BaseBdev3", 00:30:28.142 "uuid": "24e19e24-2195-487f-890e-b9d478428093", 00:30:28.142 "is_configured": true, 00:30:28.142 "data_offset": 0, 00:30:28.142 "data_size": 65536 00:30:28.142 }, 00:30:28.142 { 00:30:28.142 "name": "BaseBdev4", 00:30:28.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:28.142 "is_configured": false, 00:30:28.142 "data_offset": 0, 00:30:28.142 "data_size": 0 00:30:28.142 } 00:30:28.142 ] 00:30:28.142 }' 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:28.142 13:50:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.711 13:50:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:30:28.711 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.711 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.711 [2024-11-20 13:50:31.485306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:28.711 [2024-11-20 13:50:31.485435] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:28.711 [2024-11-20 13:50:31.485451] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:30:28.711 [2024-11-20 13:50:31.485813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:28.711 [2024-11-20 13:50:31.492739] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:28.711 BaseBdev4 00:30:28.711 [2024-11-20 13:50:31.492991] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:30:28.711 [2024-11-20 13:50:31.493385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:28.711 13:50:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.711 13:50:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:30:28.711 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:30:28.711 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:28.711 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:28.711 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:28.711 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:28.711 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:28.711 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.711 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.711 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.711 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:28.711 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.711 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.711 [ 00:30:28.711 { 00:30:28.711 "name": "BaseBdev4", 00:30:28.711 "aliases": [ 00:30:28.711 "6237c97f-d495-40ec-abcd-fa446445eff8" 00:30:28.711 ], 00:30:28.711 "product_name": "Malloc disk", 00:30:28.711 "block_size": 512, 00:30:28.711 "num_blocks": 65536, 00:30:28.711 "uuid": "6237c97f-d495-40ec-abcd-fa446445eff8", 00:30:28.711 "assigned_rate_limits": { 00:30:28.711 "rw_ios_per_sec": 0, 00:30:28.711 
"rw_mbytes_per_sec": 0, 00:30:28.711 "r_mbytes_per_sec": 0, 00:30:28.711 "w_mbytes_per_sec": 0 00:30:28.711 }, 00:30:28.711 "claimed": true, 00:30:28.711 "claim_type": "exclusive_write", 00:30:28.711 "zoned": false, 00:30:28.711 "supported_io_types": { 00:30:28.711 "read": true, 00:30:28.711 "write": true, 00:30:28.711 "unmap": true, 00:30:28.711 "flush": true, 00:30:28.711 "reset": true, 00:30:28.711 "nvme_admin": false, 00:30:28.711 "nvme_io": false, 00:30:28.711 "nvme_io_md": false, 00:30:28.711 "write_zeroes": true, 00:30:28.711 "zcopy": true, 00:30:28.711 "get_zone_info": false, 00:30:28.711 "zone_management": false, 00:30:28.711 "zone_append": false, 00:30:28.711 "compare": false, 00:30:28.712 "compare_and_write": false, 00:30:28.712 "abort": true, 00:30:28.712 "seek_hole": false, 00:30:28.712 "seek_data": false, 00:30:28.712 "copy": true, 00:30:28.712 "nvme_iov_md": false 00:30:28.712 }, 00:30:28.712 "memory_domains": [ 00:30:28.712 { 00:30:28.712 "dma_device_id": "system", 00:30:28.712 "dma_device_type": 1 00:30:28.712 }, 00:30:28.712 { 00:30:28.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:28.712 "dma_device_type": 2 00:30:28.712 } 00:30:28.712 ], 00:30:28.712 "driver_specific": {} 00:30:28.712 } 00:30:28.712 ] 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:28.712 13:50:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:28.712 "name": "Existed_Raid", 00:30:28.712 "uuid": "c01da9a3-2cf3-4dec-8f92-ca98f138a547", 00:30:28.712 "strip_size_kb": 64, 00:30:28.712 "state": "online", 00:30:28.712 "raid_level": "raid5f", 00:30:28.712 "superblock": false, 00:30:28.712 "num_base_bdevs": 4, 00:30:28.712 "num_base_bdevs_discovered": 4, 00:30:28.712 "num_base_bdevs_operational": 4, 00:30:28.712 "base_bdevs_list": [ 00:30:28.712 { 00:30:28.712 "name": 
"BaseBdev1", 00:30:28.712 "uuid": "f485da67-e465-45e3-9c21-f63e736321bd", 00:30:28.712 "is_configured": true, 00:30:28.712 "data_offset": 0, 00:30:28.712 "data_size": 65536 00:30:28.712 }, 00:30:28.712 { 00:30:28.712 "name": "BaseBdev2", 00:30:28.712 "uuid": "ed08e86a-f321-46b4-8c91-55b3b488fee7", 00:30:28.712 "is_configured": true, 00:30:28.712 "data_offset": 0, 00:30:28.712 "data_size": 65536 00:30:28.712 }, 00:30:28.712 { 00:30:28.712 "name": "BaseBdev3", 00:30:28.712 "uuid": "24e19e24-2195-487f-890e-b9d478428093", 00:30:28.712 "is_configured": true, 00:30:28.712 "data_offset": 0, 00:30:28.712 "data_size": 65536 00:30:28.712 }, 00:30:28.712 { 00:30:28.712 "name": "BaseBdev4", 00:30:28.712 "uuid": "6237c97f-d495-40ec-abcd-fa446445eff8", 00:30:28.712 "is_configured": true, 00:30:28.712 "data_offset": 0, 00:30:28.712 "data_size": 65536 00:30:28.712 } 00:30:28.712 ] 00:30:28.712 }' 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:28.712 13:50:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.280 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:30:29.280 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:29.280 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:29.280 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:29.280 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:29.280 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:29.280 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:29.280 13:50:32 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:29.280 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.280 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.280 [2024-11-20 13:50:32.057597] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:29.280 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.280 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:29.280 "name": "Existed_Raid", 00:30:29.280 "aliases": [ 00:30:29.280 "c01da9a3-2cf3-4dec-8f92-ca98f138a547" 00:30:29.280 ], 00:30:29.280 "product_name": "Raid Volume", 00:30:29.280 "block_size": 512, 00:30:29.280 "num_blocks": 196608, 00:30:29.280 "uuid": "c01da9a3-2cf3-4dec-8f92-ca98f138a547", 00:30:29.280 "assigned_rate_limits": { 00:30:29.280 "rw_ios_per_sec": 0, 00:30:29.280 "rw_mbytes_per_sec": 0, 00:30:29.280 "r_mbytes_per_sec": 0, 00:30:29.280 "w_mbytes_per_sec": 0 00:30:29.280 }, 00:30:29.280 "claimed": false, 00:30:29.280 "zoned": false, 00:30:29.280 "supported_io_types": { 00:30:29.280 "read": true, 00:30:29.280 "write": true, 00:30:29.280 "unmap": false, 00:30:29.280 "flush": false, 00:30:29.280 "reset": true, 00:30:29.280 "nvme_admin": false, 00:30:29.280 "nvme_io": false, 00:30:29.280 "nvme_io_md": false, 00:30:29.280 "write_zeroes": true, 00:30:29.280 "zcopy": false, 00:30:29.280 "get_zone_info": false, 00:30:29.280 "zone_management": false, 00:30:29.280 "zone_append": false, 00:30:29.280 "compare": false, 00:30:29.280 "compare_and_write": false, 00:30:29.280 "abort": false, 00:30:29.280 "seek_hole": false, 00:30:29.280 "seek_data": false, 00:30:29.280 "copy": false, 00:30:29.280 "nvme_iov_md": false 00:30:29.280 }, 00:30:29.280 "driver_specific": { 00:30:29.280 "raid": { 00:30:29.280 "uuid": "c01da9a3-2cf3-4dec-8f92-ca98f138a547", 00:30:29.280 "strip_size_kb": 64, 
00:30:29.280 "state": "online", 00:30:29.280 "raid_level": "raid5f", 00:30:29.280 "superblock": false, 00:30:29.280 "num_base_bdevs": 4, 00:30:29.280 "num_base_bdevs_discovered": 4, 00:30:29.280 "num_base_bdevs_operational": 4, 00:30:29.280 "base_bdevs_list": [ 00:30:29.280 { 00:30:29.280 "name": "BaseBdev1", 00:30:29.280 "uuid": "f485da67-e465-45e3-9c21-f63e736321bd", 00:30:29.280 "is_configured": true, 00:30:29.280 "data_offset": 0, 00:30:29.280 "data_size": 65536 00:30:29.280 }, 00:30:29.280 { 00:30:29.280 "name": "BaseBdev2", 00:30:29.280 "uuid": "ed08e86a-f321-46b4-8c91-55b3b488fee7", 00:30:29.280 "is_configured": true, 00:30:29.280 "data_offset": 0, 00:30:29.280 "data_size": 65536 00:30:29.280 }, 00:30:29.280 { 00:30:29.280 "name": "BaseBdev3", 00:30:29.280 "uuid": "24e19e24-2195-487f-890e-b9d478428093", 00:30:29.280 "is_configured": true, 00:30:29.280 "data_offset": 0, 00:30:29.280 "data_size": 65536 00:30:29.280 }, 00:30:29.280 { 00:30:29.280 "name": "BaseBdev4", 00:30:29.280 "uuid": "6237c97f-d495-40ec-abcd-fa446445eff8", 00:30:29.280 "is_configured": true, 00:30:29.280 "data_offset": 0, 00:30:29.280 "data_size": 65536 00:30:29.280 } 00:30:29.280 ] 00:30:29.280 } 00:30:29.280 } 00:30:29.280 }' 00:30:29.280 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:29.280 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:30:29.280 BaseBdev2 00:30:29.280 BaseBdev3 00:30:29.280 BaseBdev4' 00:30:29.280 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:29.539 13:50:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.539 13:50:32 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:30:29.539 [2024-11-20 13:50:32.425562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:29.798 13:50:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.798 13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:29.798 "name": "Existed_Raid", 00:30:29.798 "uuid": "c01da9a3-2cf3-4dec-8f92-ca98f138a547", 00:30:29.798 "strip_size_kb": 64, 00:30:29.798 "state": "online", 00:30:29.798 "raid_level": "raid5f", 00:30:29.798 "superblock": false, 00:30:29.798 "num_base_bdevs": 4, 00:30:29.798 "num_base_bdevs_discovered": 3, 00:30:29.798 "num_base_bdevs_operational": 3, 00:30:29.798 "base_bdevs_list": [ 00:30:29.798 { 00:30:29.798 "name": null, 00:30:29.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.798 "is_configured": false, 00:30:29.798 "data_offset": 0, 00:30:29.798 "data_size": 65536 00:30:29.798 }, 00:30:29.798 { 00:30:29.798 "name": "BaseBdev2", 00:30:29.798 "uuid": "ed08e86a-f321-46b4-8c91-55b3b488fee7", 00:30:29.798 "is_configured": true, 00:30:29.799 "data_offset": 0, 00:30:29.799 "data_size": 65536 00:30:29.799 }, 00:30:29.799 { 00:30:29.799 "name": "BaseBdev3", 00:30:29.799 "uuid": "24e19e24-2195-487f-890e-b9d478428093", 00:30:29.799 "is_configured": true, 00:30:29.799 "data_offset": 0, 00:30:29.799 "data_size": 65536 00:30:29.799 }, 00:30:29.799 { 00:30:29.799 "name": "BaseBdev4", 00:30:29.799 "uuid": "6237c97f-d495-40ec-abcd-fa446445eff8", 00:30:29.799 "is_configured": true, 00:30:29.799 "data_offset": 0, 00:30:29.799 "data_size": 65536 00:30:29.799 } 00:30:29.799 ] 00:30:29.799 }' 00:30:29.799 
13:50:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:29.799 13:50:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.366 [2024-11-20 13:50:33.113364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:30.366 [2024-11-20 13:50:33.113671] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:30.366 [2024-11-20 13:50:33.197150] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.366 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.366 [2024-11-20 13:50:33.257210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.625 [2024-11-20 13:50:33.401768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:30:30.625 [2024-11-20 13:50:33.401976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:30:30.625 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.884 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:30:30.884 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:30:30.884 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:30:30.884 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:30:30.884 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:30.884 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:30.884 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.884 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.884 BaseBdev2 00:30:30.884 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.884 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:30:30.884 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.885 [ 00:30:30.885 { 00:30:30.885 "name": "BaseBdev2", 00:30:30.885 "aliases": [ 00:30:30.885 "c28ffde1-c8a9-41f6-88a3-a020bc5dad1c" 00:30:30.885 ], 00:30:30.885 "product_name": "Malloc disk", 00:30:30.885 "block_size": 512, 00:30:30.885 "num_blocks": 65536, 00:30:30.885 "uuid": "c28ffde1-c8a9-41f6-88a3-a020bc5dad1c", 00:30:30.885 "assigned_rate_limits": { 00:30:30.885 "rw_ios_per_sec": 0, 00:30:30.885 "rw_mbytes_per_sec": 0, 00:30:30.885 "r_mbytes_per_sec": 0, 00:30:30.885 "w_mbytes_per_sec": 0 00:30:30.885 }, 00:30:30.885 "claimed": false, 00:30:30.885 "zoned": false, 00:30:30.885 "supported_io_types": { 00:30:30.885 "read": true, 00:30:30.885 "write": true, 00:30:30.885 "unmap": true, 00:30:30.885 "flush": true, 00:30:30.885 "reset": true, 00:30:30.885 "nvme_admin": false, 00:30:30.885 "nvme_io": false, 00:30:30.885 "nvme_io_md": false, 00:30:30.885 "write_zeroes": true, 00:30:30.885 "zcopy": true, 00:30:30.885 "get_zone_info": false, 00:30:30.885 "zone_management": false, 00:30:30.885 "zone_append": false, 00:30:30.885 "compare": false, 00:30:30.885 "compare_and_write": false, 00:30:30.885 "abort": true, 00:30:30.885 "seek_hole": false, 00:30:30.885 "seek_data": false, 00:30:30.885 "copy": true, 00:30:30.885 "nvme_iov_md": false 00:30:30.885 }, 00:30:30.885 "memory_domains": [ 00:30:30.885 { 00:30:30.885 "dma_device_id": "system", 00:30:30.885 
"dma_device_type": 1 00:30:30.885 }, 00:30:30.885 { 00:30:30.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:30.885 "dma_device_type": 2 00:30:30.885 } 00:30:30.885 ], 00:30:30.885 "driver_specific": {} 00:30:30.885 } 00:30:30.885 ] 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.885 BaseBdev3 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:30.885 13:50:33 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.885 [ 00:30:30.885 { 00:30:30.885 "name": "BaseBdev3", 00:30:30.885 "aliases": [ 00:30:30.885 "c86d62c4-6621-4bb2-bbb6-b8a0037f4107" 00:30:30.885 ], 00:30:30.885 "product_name": "Malloc disk", 00:30:30.885 "block_size": 512, 00:30:30.885 "num_blocks": 65536, 00:30:30.885 "uuid": "c86d62c4-6621-4bb2-bbb6-b8a0037f4107", 00:30:30.885 "assigned_rate_limits": { 00:30:30.885 "rw_ios_per_sec": 0, 00:30:30.885 "rw_mbytes_per_sec": 0, 00:30:30.885 "r_mbytes_per_sec": 0, 00:30:30.885 "w_mbytes_per_sec": 0 00:30:30.885 }, 00:30:30.885 "claimed": false, 00:30:30.885 "zoned": false, 00:30:30.885 "supported_io_types": { 00:30:30.885 "read": true, 00:30:30.885 "write": true, 00:30:30.885 "unmap": true, 00:30:30.885 "flush": true, 00:30:30.885 "reset": true, 00:30:30.885 "nvme_admin": false, 00:30:30.885 "nvme_io": false, 00:30:30.885 "nvme_io_md": false, 00:30:30.885 "write_zeroes": true, 00:30:30.885 "zcopy": true, 00:30:30.885 "get_zone_info": false, 00:30:30.885 "zone_management": false, 00:30:30.885 "zone_append": false, 00:30:30.885 "compare": false, 00:30:30.885 "compare_and_write": false, 00:30:30.885 "abort": true, 00:30:30.885 "seek_hole": false, 00:30:30.885 "seek_data": false, 00:30:30.885 "copy": true, 00:30:30.885 "nvme_iov_md": false 00:30:30.885 }, 00:30:30.885 "memory_domains": [ 00:30:30.885 { 00:30:30.885 
"dma_device_id": "system", 00:30:30.885 "dma_device_type": 1 00:30:30.885 }, 00:30:30.885 { 00:30:30.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:30.885 "dma_device_type": 2 00:30:30.885 } 00:30:30.885 ], 00:30:30.885 "driver_specific": {} 00:30:30.885 } 00:30:30.885 ] 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.885 BaseBdev4 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.885 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.885 [ 00:30:30.885 { 00:30:30.885 "name": "BaseBdev4", 00:30:30.885 "aliases": [ 00:30:30.885 "a77947ef-3041-4b68-80c2-02cb5166e82c" 00:30:30.885 ], 00:30:30.885 "product_name": "Malloc disk", 00:30:30.885 "block_size": 512, 00:30:30.885 "num_blocks": 65536, 00:30:30.885 "uuid": "a77947ef-3041-4b68-80c2-02cb5166e82c", 00:30:30.885 "assigned_rate_limits": { 00:30:30.885 "rw_ios_per_sec": 0, 00:30:30.885 "rw_mbytes_per_sec": 0, 00:30:30.885 "r_mbytes_per_sec": 0, 00:30:30.885 "w_mbytes_per_sec": 0 00:30:30.885 }, 00:30:30.885 "claimed": false, 00:30:30.886 "zoned": false, 00:30:30.886 "supported_io_types": { 00:30:30.886 "read": true, 00:30:30.886 "write": true, 00:30:30.886 "unmap": true, 00:30:30.886 "flush": true, 00:30:30.886 "reset": true, 00:30:30.886 "nvme_admin": false, 00:30:30.886 "nvme_io": false, 00:30:30.886 "nvme_io_md": false, 00:30:30.886 "write_zeroes": true, 00:30:30.886 "zcopy": true, 00:30:30.886 "get_zone_info": false, 00:30:30.886 "zone_management": false, 00:30:30.886 "zone_append": false, 00:30:30.886 "compare": false, 00:30:30.886 "compare_and_write": false, 00:30:30.886 "abort": true, 00:30:30.886 "seek_hole": false, 00:30:30.886 "seek_data": false, 00:30:30.886 "copy": true, 00:30:30.886 "nvme_iov_md": false 00:30:30.886 }, 00:30:30.886 "memory_domains": [ 
00:30:30.886 { 00:30:30.886 "dma_device_id": "system", 00:30:30.886 "dma_device_type": 1 00:30:30.886 }, 00:30:30.886 { 00:30:30.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:30.886 "dma_device_type": 2 00:30:30.886 } 00:30:30.886 ], 00:30:30.886 "driver_specific": {} 00:30:30.886 } 00:30:30.886 ] 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.886 [2024-11-20 13:50:33.768199] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:30.886 [2024-11-20 13:50:33.768427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:30.886 [2024-11-20 13:50:33.768581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:30.886 [2024-11-20 13:50:33.771106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:30.886 [2024-11-20 13:50:33.771329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.886 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.145 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:31.145 "name": "Existed_Raid", 00:30:31.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:31.145 "strip_size_kb": 64, 00:30:31.145 "state": "configuring", 00:30:31.145 "raid_level": "raid5f", 00:30:31.145 
"superblock": false, 00:30:31.145 "num_base_bdevs": 4, 00:30:31.145 "num_base_bdevs_discovered": 3, 00:30:31.145 "num_base_bdevs_operational": 4, 00:30:31.145 "base_bdevs_list": [ 00:30:31.145 { 00:30:31.145 "name": "BaseBdev1", 00:30:31.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:31.145 "is_configured": false, 00:30:31.145 "data_offset": 0, 00:30:31.145 "data_size": 0 00:30:31.145 }, 00:30:31.145 { 00:30:31.145 "name": "BaseBdev2", 00:30:31.145 "uuid": "c28ffde1-c8a9-41f6-88a3-a020bc5dad1c", 00:30:31.145 "is_configured": true, 00:30:31.145 "data_offset": 0, 00:30:31.145 "data_size": 65536 00:30:31.145 }, 00:30:31.145 { 00:30:31.145 "name": "BaseBdev3", 00:30:31.145 "uuid": "c86d62c4-6621-4bb2-bbb6-b8a0037f4107", 00:30:31.145 "is_configured": true, 00:30:31.145 "data_offset": 0, 00:30:31.145 "data_size": 65536 00:30:31.145 }, 00:30:31.145 { 00:30:31.145 "name": "BaseBdev4", 00:30:31.145 "uuid": "a77947ef-3041-4b68-80c2-02cb5166e82c", 00:30:31.145 "is_configured": true, 00:30:31.145 "data_offset": 0, 00:30:31.145 "data_size": 65536 00:30:31.145 } 00:30:31.145 ] 00:30:31.145 }' 00:30:31.145 13:50:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:31.145 13:50:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:31.404 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:31.404 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.404 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:31.404 [2024-11-20 13:50:34.304655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:31.404 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.404 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:30:31.404 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:31.404 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:31.404 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:31.404 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:31.404 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:31.404 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:31.404 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:31.404 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:31.404 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:31.404 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:31.404 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:31.404 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.404 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:31.663 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.663 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:31.663 "name": "Existed_Raid", 00:30:31.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:31.663 "strip_size_kb": 64, 00:30:31.663 "state": "configuring", 00:30:31.663 "raid_level": "raid5f", 00:30:31.663 "superblock": false, 
00:30:31.663 "num_base_bdevs": 4, 00:30:31.663 "num_base_bdevs_discovered": 2, 00:30:31.663 "num_base_bdevs_operational": 4, 00:30:31.663 "base_bdevs_list": [ 00:30:31.663 { 00:30:31.663 "name": "BaseBdev1", 00:30:31.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:31.663 "is_configured": false, 00:30:31.663 "data_offset": 0, 00:30:31.663 "data_size": 0 00:30:31.663 }, 00:30:31.663 { 00:30:31.663 "name": null, 00:30:31.663 "uuid": "c28ffde1-c8a9-41f6-88a3-a020bc5dad1c", 00:30:31.663 "is_configured": false, 00:30:31.663 "data_offset": 0, 00:30:31.663 "data_size": 65536 00:30:31.663 }, 00:30:31.663 { 00:30:31.663 "name": "BaseBdev3", 00:30:31.663 "uuid": "c86d62c4-6621-4bb2-bbb6-b8a0037f4107", 00:30:31.663 "is_configured": true, 00:30:31.663 "data_offset": 0, 00:30:31.663 "data_size": 65536 00:30:31.663 }, 00:30:31.663 { 00:30:31.663 "name": "BaseBdev4", 00:30:31.663 "uuid": "a77947ef-3041-4b68-80c2-02cb5166e82c", 00:30:31.663 "is_configured": true, 00:30:31.663 "data_offset": 0, 00:30:31.663 "data_size": 65536 00:30:31.663 } 00:30:31.663 ] 00:30:31.663 }' 00:30:31.663 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:31.663 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.230 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:32.230 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:32.230 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.230 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.230 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.230 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:30:32.230 
13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:32.230 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.230 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.230 [2024-11-20 13:50:34.928690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:32.231 BaseBdev1 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.231 
13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.231 [ 00:30:32.231 { 00:30:32.231 "name": "BaseBdev1", 00:30:32.231 "aliases": [ 00:30:32.231 "9105078f-53db-449a-b718-37053c0824cc" 00:30:32.231 ], 00:30:32.231 "product_name": "Malloc disk", 00:30:32.231 "block_size": 512, 00:30:32.231 "num_blocks": 65536, 00:30:32.231 "uuid": "9105078f-53db-449a-b718-37053c0824cc", 00:30:32.231 "assigned_rate_limits": { 00:30:32.231 "rw_ios_per_sec": 0, 00:30:32.231 "rw_mbytes_per_sec": 0, 00:30:32.231 "r_mbytes_per_sec": 0, 00:30:32.231 "w_mbytes_per_sec": 0 00:30:32.231 }, 00:30:32.231 "claimed": true, 00:30:32.231 "claim_type": "exclusive_write", 00:30:32.231 "zoned": false, 00:30:32.231 "supported_io_types": { 00:30:32.231 "read": true, 00:30:32.231 "write": true, 00:30:32.231 "unmap": true, 00:30:32.231 "flush": true, 00:30:32.231 "reset": true, 00:30:32.231 "nvme_admin": false, 00:30:32.231 "nvme_io": false, 00:30:32.231 "nvme_io_md": false, 00:30:32.231 "write_zeroes": true, 00:30:32.231 "zcopy": true, 00:30:32.231 "get_zone_info": false, 00:30:32.231 "zone_management": false, 00:30:32.231 "zone_append": false, 00:30:32.231 "compare": false, 00:30:32.231 "compare_and_write": false, 00:30:32.231 "abort": true, 00:30:32.231 "seek_hole": false, 00:30:32.231 "seek_data": false, 00:30:32.231 "copy": true, 00:30:32.231 "nvme_iov_md": false 00:30:32.231 }, 00:30:32.231 "memory_domains": [ 00:30:32.231 { 00:30:32.231 "dma_device_id": "system", 00:30:32.231 "dma_device_type": 1 00:30:32.231 }, 00:30:32.231 { 00:30:32.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:32.231 "dma_device_type": 2 00:30:32.231 } 00:30:32.231 ], 00:30:32.231 "driver_specific": {} 00:30:32.231 } 00:30:32.231 ] 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:32.231 13:50:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.231 13:50:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.231 13:50:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:32.231 "name": "Existed_Raid", 00:30:32.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.231 "strip_size_kb": 64, 00:30:32.231 "state": 
"configuring", 00:30:32.231 "raid_level": "raid5f", 00:30:32.231 "superblock": false, 00:30:32.231 "num_base_bdevs": 4, 00:30:32.231 "num_base_bdevs_discovered": 3, 00:30:32.231 "num_base_bdevs_operational": 4, 00:30:32.231 "base_bdevs_list": [ 00:30:32.231 { 00:30:32.231 "name": "BaseBdev1", 00:30:32.231 "uuid": "9105078f-53db-449a-b718-37053c0824cc", 00:30:32.231 "is_configured": true, 00:30:32.231 "data_offset": 0, 00:30:32.231 "data_size": 65536 00:30:32.231 }, 00:30:32.231 { 00:30:32.231 "name": null, 00:30:32.231 "uuid": "c28ffde1-c8a9-41f6-88a3-a020bc5dad1c", 00:30:32.231 "is_configured": false, 00:30:32.231 "data_offset": 0, 00:30:32.231 "data_size": 65536 00:30:32.231 }, 00:30:32.231 { 00:30:32.231 "name": "BaseBdev3", 00:30:32.231 "uuid": "c86d62c4-6621-4bb2-bbb6-b8a0037f4107", 00:30:32.231 "is_configured": true, 00:30:32.231 "data_offset": 0, 00:30:32.231 "data_size": 65536 00:30:32.231 }, 00:30:32.231 { 00:30:32.231 "name": "BaseBdev4", 00:30:32.231 "uuid": "a77947ef-3041-4b68-80c2-02cb5166e82c", 00:30:32.231 "is_configured": true, 00:30:32.231 "data_offset": 0, 00:30:32.231 "data_size": 65536 00:30:32.231 } 00:30:32.231 ] 00:30:32.231 }' 00:30:32.231 13:50:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:32.231 13:50:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.800 13:50:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.800 [2024-11-20 13:50:35.561014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:32.800 13:50:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:32.800 "name": "Existed_Raid", 00:30:32.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.800 "strip_size_kb": 64, 00:30:32.800 "state": "configuring", 00:30:32.800 "raid_level": "raid5f", 00:30:32.800 "superblock": false, 00:30:32.800 "num_base_bdevs": 4, 00:30:32.800 "num_base_bdevs_discovered": 2, 00:30:32.800 "num_base_bdevs_operational": 4, 00:30:32.800 "base_bdevs_list": [ 00:30:32.800 { 00:30:32.800 "name": "BaseBdev1", 00:30:32.800 "uuid": "9105078f-53db-449a-b718-37053c0824cc", 00:30:32.800 "is_configured": true, 00:30:32.800 "data_offset": 0, 00:30:32.800 "data_size": 65536 00:30:32.800 }, 00:30:32.800 { 00:30:32.800 "name": null, 00:30:32.800 "uuid": "c28ffde1-c8a9-41f6-88a3-a020bc5dad1c", 00:30:32.800 "is_configured": false, 00:30:32.800 "data_offset": 0, 00:30:32.800 "data_size": 65536 00:30:32.800 }, 00:30:32.800 { 00:30:32.800 "name": null, 00:30:32.800 "uuid": "c86d62c4-6621-4bb2-bbb6-b8a0037f4107", 00:30:32.800 "is_configured": false, 00:30:32.800 "data_offset": 0, 00:30:32.800 "data_size": 65536 00:30:32.800 }, 00:30:32.800 { 00:30:32.800 "name": "BaseBdev4", 00:30:32.800 "uuid": "a77947ef-3041-4b68-80c2-02cb5166e82c", 00:30:32.800 "is_configured": true, 00:30:32.800 "data_offset": 0, 00:30:32.800 "data_size": 65536 00:30:32.800 } 00:30:32.800 ] 00:30:32.800 }' 00:30:32.800 13:50:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:32.800 13:50:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.402 [2024-11-20 13:50:36.181229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:33.402 
13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:33.402 "name": "Existed_Raid", 00:30:33.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.402 "strip_size_kb": 64, 00:30:33.402 "state": "configuring", 00:30:33.402 "raid_level": "raid5f", 00:30:33.402 "superblock": false, 00:30:33.402 "num_base_bdevs": 4, 00:30:33.402 "num_base_bdevs_discovered": 3, 00:30:33.402 "num_base_bdevs_operational": 4, 00:30:33.402 "base_bdevs_list": [ 00:30:33.402 { 00:30:33.402 "name": "BaseBdev1", 00:30:33.402 "uuid": "9105078f-53db-449a-b718-37053c0824cc", 00:30:33.402 "is_configured": true, 00:30:33.402 "data_offset": 0, 00:30:33.402 "data_size": 65536 00:30:33.402 }, 00:30:33.402 { 00:30:33.402 "name": null, 00:30:33.402 "uuid": "c28ffde1-c8a9-41f6-88a3-a020bc5dad1c", 00:30:33.402 "is_configured": 
false, 00:30:33.402 "data_offset": 0, 00:30:33.402 "data_size": 65536 00:30:33.402 }, 00:30:33.402 { 00:30:33.402 "name": "BaseBdev3", 00:30:33.402 "uuid": "c86d62c4-6621-4bb2-bbb6-b8a0037f4107", 00:30:33.402 "is_configured": true, 00:30:33.402 "data_offset": 0, 00:30:33.402 "data_size": 65536 00:30:33.402 }, 00:30:33.402 { 00:30:33.402 "name": "BaseBdev4", 00:30:33.402 "uuid": "a77947ef-3041-4b68-80c2-02cb5166e82c", 00:30:33.402 "is_configured": true, 00:30:33.402 "data_offset": 0, 00:30:33.402 "data_size": 65536 00:30:33.402 } 00:30:33.402 ] 00:30:33.402 }' 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:33.402 13:50:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.969 [2024-11-20 13:50:36.793484] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:33.969 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:34.229 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:34.229 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:34.229 13:50:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.229 13:50:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:34.229 13:50:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.229 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:34.229 "name": "Existed_Raid", 00:30:34.229 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:34.229 "strip_size_kb": 64, 00:30:34.229 "state": "configuring", 00:30:34.229 "raid_level": "raid5f", 00:30:34.229 "superblock": false, 00:30:34.229 "num_base_bdevs": 4, 00:30:34.229 "num_base_bdevs_discovered": 2, 00:30:34.229 "num_base_bdevs_operational": 4, 00:30:34.229 "base_bdevs_list": [ 00:30:34.229 { 00:30:34.229 "name": null, 00:30:34.229 "uuid": "9105078f-53db-449a-b718-37053c0824cc", 00:30:34.229 "is_configured": false, 00:30:34.229 "data_offset": 0, 00:30:34.229 "data_size": 65536 00:30:34.229 }, 00:30:34.229 { 00:30:34.229 "name": null, 00:30:34.229 "uuid": "c28ffde1-c8a9-41f6-88a3-a020bc5dad1c", 00:30:34.229 "is_configured": false, 00:30:34.229 "data_offset": 0, 00:30:34.229 "data_size": 65536 00:30:34.229 }, 00:30:34.229 { 00:30:34.229 "name": "BaseBdev3", 00:30:34.229 "uuid": "c86d62c4-6621-4bb2-bbb6-b8a0037f4107", 00:30:34.229 "is_configured": true, 00:30:34.229 "data_offset": 0, 00:30:34.229 "data_size": 65536 00:30:34.229 }, 00:30:34.229 { 00:30:34.229 "name": "BaseBdev4", 00:30:34.229 "uuid": "a77947ef-3041-4b68-80c2-02cb5166e82c", 00:30:34.229 "is_configured": true, 00:30:34.229 "data_offset": 0, 00:30:34.229 "data_size": 65536 00:30:34.229 } 00:30:34.229 ] 00:30:34.229 }' 00:30:34.229 13:50:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:34.229 13:50:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:34.795 [2024-11-20 13:50:37.466862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:34.795 "name": "Existed_Raid", 00:30:34.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.795 "strip_size_kb": 64, 00:30:34.795 "state": "configuring", 00:30:34.795 "raid_level": "raid5f", 00:30:34.795 "superblock": false, 00:30:34.795 "num_base_bdevs": 4, 00:30:34.795 "num_base_bdevs_discovered": 3, 00:30:34.795 "num_base_bdevs_operational": 4, 00:30:34.795 "base_bdevs_list": [ 00:30:34.795 { 00:30:34.795 "name": null, 00:30:34.795 "uuid": "9105078f-53db-449a-b718-37053c0824cc", 00:30:34.795 "is_configured": false, 00:30:34.795 "data_offset": 0, 00:30:34.795 "data_size": 65536 00:30:34.795 }, 00:30:34.795 { 00:30:34.795 "name": "BaseBdev2", 00:30:34.795 "uuid": "c28ffde1-c8a9-41f6-88a3-a020bc5dad1c", 00:30:34.795 "is_configured": true, 00:30:34.795 "data_offset": 0, 00:30:34.795 "data_size": 65536 00:30:34.795 }, 00:30:34.795 { 00:30:34.795 "name": "BaseBdev3", 00:30:34.795 "uuid": "c86d62c4-6621-4bb2-bbb6-b8a0037f4107", 00:30:34.795 "is_configured": true, 00:30:34.795 "data_offset": 0, 00:30:34.795 "data_size": 65536 00:30:34.795 }, 00:30:34.795 { 00:30:34.795 "name": "BaseBdev4", 00:30:34.795 "uuid": "a77947ef-3041-4b68-80c2-02cb5166e82c", 00:30:34.795 "is_configured": true, 00:30:34.795 "data_offset": 0, 00:30:34.795 "data_size": 65536 00:30:34.795 } 00:30:34.795 ] 00:30:34.795 }' 00:30:34.795 13:50:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:34.795 13:50:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.363 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:35.363 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.363 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.363 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9105078f-53db-449a-b718-37053c0824cc 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.364 [2024-11-20 13:50:38.147062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:35.364 [2024-11-20 
13:50:38.147300] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:35.364 [2024-11-20 13:50:38.147331] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:30:35.364 [2024-11-20 13:50:38.147694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:30:35.364 NewBaseBdev 00:30:35.364 [2024-11-20 13:50:38.153993] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:35.364 [2024-11-20 13:50:38.154023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:30:35.364 [2024-11-20 13:50:38.154338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.364 [ 00:30:35.364 { 00:30:35.364 "name": "NewBaseBdev", 00:30:35.364 "aliases": [ 00:30:35.364 "9105078f-53db-449a-b718-37053c0824cc" 00:30:35.364 ], 00:30:35.364 "product_name": "Malloc disk", 00:30:35.364 "block_size": 512, 00:30:35.364 "num_blocks": 65536, 00:30:35.364 "uuid": "9105078f-53db-449a-b718-37053c0824cc", 00:30:35.364 "assigned_rate_limits": { 00:30:35.364 "rw_ios_per_sec": 0, 00:30:35.364 "rw_mbytes_per_sec": 0, 00:30:35.364 "r_mbytes_per_sec": 0, 00:30:35.364 "w_mbytes_per_sec": 0 00:30:35.364 }, 00:30:35.364 "claimed": true, 00:30:35.364 "claim_type": "exclusive_write", 00:30:35.364 "zoned": false, 00:30:35.364 "supported_io_types": { 00:30:35.364 "read": true, 00:30:35.364 "write": true, 00:30:35.364 "unmap": true, 00:30:35.364 "flush": true, 00:30:35.364 "reset": true, 00:30:35.364 "nvme_admin": false, 00:30:35.364 "nvme_io": false, 00:30:35.364 "nvme_io_md": false, 00:30:35.364 "write_zeroes": true, 00:30:35.364 "zcopy": true, 00:30:35.364 "get_zone_info": false, 00:30:35.364 "zone_management": false, 00:30:35.364 "zone_append": false, 00:30:35.364 "compare": false, 00:30:35.364 "compare_and_write": false, 00:30:35.364 "abort": true, 00:30:35.364 "seek_hole": false, 00:30:35.364 "seek_data": false, 00:30:35.364 "copy": true, 00:30:35.364 "nvme_iov_md": false 00:30:35.364 }, 00:30:35.364 "memory_domains": [ 00:30:35.364 { 00:30:35.364 "dma_device_id": "system", 00:30:35.364 "dma_device_type": 1 00:30:35.364 }, 00:30:35.364 { 00:30:35.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:35.364 "dma_device_type": 2 00:30:35.364 } 
00:30:35.364 ], 00:30:35.364 "driver_specific": {} 00:30:35.364 } 00:30:35.364 ] 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:35.364 "name": "Existed_Raid", 00:30:35.364 "uuid": "8c924e09-53d3-42f2-b7a9-e0c755897c94", 00:30:35.364 "strip_size_kb": 64, 00:30:35.364 "state": "online", 00:30:35.364 "raid_level": "raid5f", 00:30:35.364 "superblock": false, 00:30:35.364 "num_base_bdevs": 4, 00:30:35.364 "num_base_bdevs_discovered": 4, 00:30:35.364 "num_base_bdevs_operational": 4, 00:30:35.364 "base_bdevs_list": [ 00:30:35.364 { 00:30:35.364 "name": "NewBaseBdev", 00:30:35.364 "uuid": "9105078f-53db-449a-b718-37053c0824cc", 00:30:35.364 "is_configured": true, 00:30:35.364 "data_offset": 0, 00:30:35.364 "data_size": 65536 00:30:35.364 }, 00:30:35.364 { 00:30:35.364 "name": "BaseBdev2", 00:30:35.364 "uuid": "c28ffde1-c8a9-41f6-88a3-a020bc5dad1c", 00:30:35.364 "is_configured": true, 00:30:35.364 "data_offset": 0, 00:30:35.364 "data_size": 65536 00:30:35.364 }, 00:30:35.364 { 00:30:35.364 "name": "BaseBdev3", 00:30:35.364 "uuid": "c86d62c4-6621-4bb2-bbb6-b8a0037f4107", 00:30:35.364 "is_configured": true, 00:30:35.364 "data_offset": 0, 00:30:35.364 "data_size": 65536 00:30:35.364 }, 00:30:35.364 { 00:30:35.364 "name": "BaseBdev4", 00:30:35.364 "uuid": "a77947ef-3041-4b68-80c2-02cb5166e82c", 00:30:35.364 "is_configured": true, 00:30:35.364 "data_offset": 0, 00:30:35.364 "data_size": 65536 00:30:35.364 } 00:30:35.364 ] 00:30:35.364 }' 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:35.364 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.931 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:30:35.931 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:35.931 13:50:38 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:35.931 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:35.931 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:35.931 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:35.931 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:35.931 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.931 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.931 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:35.931 [2024-11-20 13:50:38.737925] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:35.931 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.932 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:35.932 "name": "Existed_Raid", 00:30:35.932 "aliases": [ 00:30:35.932 "8c924e09-53d3-42f2-b7a9-e0c755897c94" 00:30:35.932 ], 00:30:35.932 "product_name": "Raid Volume", 00:30:35.932 "block_size": 512, 00:30:35.932 "num_blocks": 196608, 00:30:35.932 "uuid": "8c924e09-53d3-42f2-b7a9-e0c755897c94", 00:30:35.932 "assigned_rate_limits": { 00:30:35.932 "rw_ios_per_sec": 0, 00:30:35.932 "rw_mbytes_per_sec": 0, 00:30:35.932 "r_mbytes_per_sec": 0, 00:30:35.932 "w_mbytes_per_sec": 0 00:30:35.932 }, 00:30:35.932 "claimed": false, 00:30:35.932 "zoned": false, 00:30:35.932 "supported_io_types": { 00:30:35.932 "read": true, 00:30:35.932 "write": true, 00:30:35.932 "unmap": false, 00:30:35.932 "flush": false, 00:30:35.932 "reset": true, 00:30:35.932 "nvme_admin": false, 00:30:35.932 "nvme_io": false, 00:30:35.932 "nvme_io_md": 
false, 00:30:35.932 "write_zeroes": true, 00:30:35.932 "zcopy": false, 00:30:35.932 "get_zone_info": false, 00:30:35.932 "zone_management": false, 00:30:35.932 "zone_append": false, 00:30:35.932 "compare": false, 00:30:35.932 "compare_and_write": false, 00:30:35.932 "abort": false, 00:30:35.932 "seek_hole": false, 00:30:35.932 "seek_data": false, 00:30:35.932 "copy": false, 00:30:35.932 "nvme_iov_md": false 00:30:35.932 }, 00:30:35.932 "driver_specific": { 00:30:35.932 "raid": { 00:30:35.932 "uuid": "8c924e09-53d3-42f2-b7a9-e0c755897c94", 00:30:35.932 "strip_size_kb": 64, 00:30:35.932 "state": "online", 00:30:35.932 "raid_level": "raid5f", 00:30:35.932 "superblock": false, 00:30:35.932 "num_base_bdevs": 4, 00:30:35.932 "num_base_bdevs_discovered": 4, 00:30:35.932 "num_base_bdevs_operational": 4, 00:30:35.932 "base_bdevs_list": [ 00:30:35.932 { 00:30:35.932 "name": "NewBaseBdev", 00:30:35.932 "uuid": "9105078f-53db-449a-b718-37053c0824cc", 00:30:35.932 "is_configured": true, 00:30:35.932 "data_offset": 0, 00:30:35.932 "data_size": 65536 00:30:35.932 }, 00:30:35.932 { 00:30:35.932 "name": "BaseBdev2", 00:30:35.932 "uuid": "c28ffde1-c8a9-41f6-88a3-a020bc5dad1c", 00:30:35.932 "is_configured": true, 00:30:35.932 "data_offset": 0, 00:30:35.932 "data_size": 65536 00:30:35.932 }, 00:30:35.932 { 00:30:35.932 "name": "BaseBdev3", 00:30:35.932 "uuid": "c86d62c4-6621-4bb2-bbb6-b8a0037f4107", 00:30:35.932 "is_configured": true, 00:30:35.932 "data_offset": 0, 00:30:35.932 "data_size": 65536 00:30:35.932 }, 00:30:35.932 { 00:30:35.932 "name": "BaseBdev4", 00:30:35.932 "uuid": "a77947ef-3041-4b68-80c2-02cb5166e82c", 00:30:35.932 "is_configured": true, 00:30:35.932 "data_offset": 0, 00:30:35.932 "data_size": 65536 00:30:35.932 } 00:30:35.932 ] 00:30:35.932 } 00:30:35.932 } 00:30:35.932 }' 00:30:35.932 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:36.213 13:50:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:30:36.213 BaseBdev2 00:30:36.213 BaseBdev3 00:30:36.213 BaseBdev4' 00:30:36.213 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:36.213 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:36.213 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:36.213 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:30:36.213 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:36.213 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.213 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:36.213 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.213 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:36.213 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:36.213 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:36.213 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:36.213 13:50:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:36.213 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.213 13:50:38 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:30:36.213 13:50:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:36.213 13:50:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.213 13:50:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:36.472 [2024-11-20 13:50:39.129785] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:36.472 [2024-11-20 13:50:39.129997] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:36.472 [2024-11-20 13:50:39.130217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:36.472 [2024-11-20 13:50:39.130742] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:36.472 [2024-11-20 13:50:39.130939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:30:36.472 13:50:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.472 13:50:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83401 00:30:36.472 13:50:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83401 ']' 00:30:36.472 13:50:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83401 00:30:36.472 13:50:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:30:36.472 13:50:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:30:36.472 13:50:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83401 00:30:36.472 13:50:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:36.472 13:50:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:36.472 13:50:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83401' 00:30:36.472 killing process with pid 83401 00:30:36.472 13:50:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83401 00:30:36.472 [2024-11-20 13:50:39.171123] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:36.472 13:50:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83401 00:30:36.731 [2024-11-20 13:50:39.503071] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:37.665 13:50:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:30:37.665 00:30:37.665 real 0m13.281s 00:30:37.665 user 0m22.047s 00:30:37.665 sys 0m1.990s 00:30:37.665 13:50:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:37.665 ************************************ 00:30:37.665 END TEST raid5f_state_function_test 00:30:37.665 ************************************ 00:30:37.665 13:50:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:37.923 13:50:40 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:30:37.923 13:50:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:37.923 13:50:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:37.923 13:50:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:37.924 ************************************ 00:30:37.924 START TEST 
raid5f_state_function_test_sb 00:30:37.924 ************************************ 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:30:37.924 
13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:30:37.924 Process raid pid: 84085 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84085 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84085' 00:30:37.924 13:50:40 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84085 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84085 ']' 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:37.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:37.924 13:50:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.924 [2024-11-20 13:50:40.722736] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:30:37.924 [2024-11-20 13:50:40.722978] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:38.183 [2024-11-20 13:50:40.915556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.183 [2024-11-20 13:50:41.049009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.441 [2024-11-20 13:50:41.259518] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:38.441 [2024-11-20 13:50:41.259559] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.007 [2024-11-20 13:50:41.737054] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:39.007 [2024-11-20 13:50:41.737120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:39.007 [2024-11-20 13:50:41.737144] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:39.007 [2024-11-20 13:50:41.737161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:39.007 [2024-11-20 13:50:41.737171] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:30:39.007 [2024-11-20 13:50:41.737185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:39.007 [2024-11-20 13:50:41.737194] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:39.007 [2024-11-20 13:50:41.737207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.007 13:50:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:39.008 "name": "Existed_Raid", 00:30:39.008 "uuid": "03390fe6-d25d-439a-8a82-7eeec9409283", 00:30:39.008 "strip_size_kb": 64, 00:30:39.008 "state": "configuring", 00:30:39.008 "raid_level": "raid5f", 00:30:39.008 "superblock": true, 00:30:39.008 "num_base_bdevs": 4, 00:30:39.008 "num_base_bdevs_discovered": 0, 00:30:39.008 "num_base_bdevs_operational": 4, 00:30:39.008 "base_bdevs_list": [ 00:30:39.008 { 00:30:39.008 "name": "BaseBdev1", 00:30:39.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.008 "is_configured": false, 00:30:39.008 "data_offset": 0, 00:30:39.008 "data_size": 0 00:30:39.008 }, 00:30:39.008 { 00:30:39.008 "name": "BaseBdev2", 00:30:39.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.008 "is_configured": false, 00:30:39.008 "data_offset": 0, 00:30:39.008 "data_size": 0 00:30:39.008 }, 00:30:39.008 { 00:30:39.008 "name": "BaseBdev3", 00:30:39.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.008 "is_configured": false, 00:30:39.008 "data_offset": 0, 00:30:39.008 "data_size": 0 00:30:39.008 }, 00:30:39.008 { 00:30:39.008 "name": "BaseBdev4", 00:30:39.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.008 "is_configured": false, 00:30:39.008 "data_offset": 0, 00:30:39.008 "data_size": 0 00:30:39.008 } 00:30:39.008 ] 00:30:39.008 }' 00:30:39.008 13:50:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:39.008 13:50:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:30:39.574 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:39.574 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.574 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.574 [2024-11-20 13:50:42.281214] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:39.574 [2024-11-20 13:50:42.281446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:30:39.574 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.575 [2024-11-20 13:50:42.289267] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:39.575 [2024-11-20 13:50:42.289512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:39.575 [2024-11-20 13:50:42.289663] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:39.575 [2024-11-20 13:50:42.289709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:39.575 [2024-11-20 13:50:42.289722] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:39.575 [2024-11-20 13:50:42.289737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:39.575 [2024-11-20 13:50:42.289747] 
bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:39.575 [2024-11-20 13:50:42.289762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.575 [2024-11-20 13:50:42.333414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:39.575 BaseBdev1 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.575 [ 00:30:39.575 { 00:30:39.575 "name": "BaseBdev1", 00:30:39.575 "aliases": [ 00:30:39.575 "0c9ce77f-41cd-4287-b517-d6fb2ac281b6" 00:30:39.575 ], 00:30:39.575 "product_name": "Malloc disk", 00:30:39.575 "block_size": 512, 00:30:39.575 "num_blocks": 65536, 00:30:39.575 "uuid": "0c9ce77f-41cd-4287-b517-d6fb2ac281b6", 00:30:39.575 "assigned_rate_limits": { 00:30:39.575 "rw_ios_per_sec": 0, 00:30:39.575 "rw_mbytes_per_sec": 0, 00:30:39.575 "r_mbytes_per_sec": 0, 00:30:39.575 "w_mbytes_per_sec": 0 00:30:39.575 }, 00:30:39.575 "claimed": true, 00:30:39.575 "claim_type": "exclusive_write", 00:30:39.575 "zoned": false, 00:30:39.575 "supported_io_types": { 00:30:39.575 "read": true, 00:30:39.575 "write": true, 00:30:39.575 "unmap": true, 00:30:39.575 "flush": true, 00:30:39.575 "reset": true, 00:30:39.575 "nvme_admin": false, 00:30:39.575 "nvme_io": false, 00:30:39.575 "nvme_io_md": false, 00:30:39.575 "write_zeroes": true, 00:30:39.575 "zcopy": true, 00:30:39.575 "get_zone_info": false, 00:30:39.575 "zone_management": false, 00:30:39.575 "zone_append": false, 00:30:39.575 "compare": false, 00:30:39.575 "compare_and_write": false, 00:30:39.575 "abort": true, 00:30:39.575 "seek_hole": false, 00:30:39.575 "seek_data": false, 00:30:39.575 "copy": true, 00:30:39.575 "nvme_iov_md": false 00:30:39.575 }, 00:30:39.575 "memory_domains": [ 00:30:39.575 { 00:30:39.575 "dma_device_id": "system", 00:30:39.575 "dma_device_type": 1 00:30:39.575 }, 00:30:39.575 { 00:30:39.575 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:30:39.575 "dma_device_type": 2 00:30:39.575 } 00:30:39.575 ], 00:30:39.575 "driver_specific": {} 00:30:39.575 } 00:30:39.575 ] 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.575 13:50:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:39.575 "name": "Existed_Raid", 00:30:39.575 "uuid": "ac7ea869-541b-47df-8f03-cd2c41c92c98", 00:30:39.575 "strip_size_kb": 64, 00:30:39.575 "state": "configuring", 00:30:39.575 "raid_level": "raid5f", 00:30:39.575 "superblock": true, 00:30:39.575 "num_base_bdevs": 4, 00:30:39.575 "num_base_bdevs_discovered": 1, 00:30:39.575 "num_base_bdevs_operational": 4, 00:30:39.575 "base_bdevs_list": [ 00:30:39.575 { 00:30:39.575 "name": "BaseBdev1", 00:30:39.575 "uuid": "0c9ce77f-41cd-4287-b517-d6fb2ac281b6", 00:30:39.575 "is_configured": true, 00:30:39.575 "data_offset": 2048, 00:30:39.575 "data_size": 63488 00:30:39.575 }, 00:30:39.575 { 00:30:39.575 "name": "BaseBdev2", 00:30:39.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.575 "is_configured": false, 00:30:39.575 "data_offset": 0, 00:30:39.575 "data_size": 0 00:30:39.575 }, 00:30:39.575 { 00:30:39.575 "name": "BaseBdev3", 00:30:39.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.575 "is_configured": false, 00:30:39.575 "data_offset": 0, 00:30:39.575 "data_size": 0 00:30:39.575 }, 00:30:39.575 { 00:30:39.575 "name": "BaseBdev4", 00:30:39.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.575 "is_configured": false, 00:30:39.575 "data_offset": 0, 00:30:39.575 "data_size": 0 00:30:39.575 } 00:30:39.575 ] 00:30:39.575 }' 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:39.575 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.142 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:40.142 13:50:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.142 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.143 [2024-11-20 13:50:42.865635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:40.143 [2024-11-20 13:50:42.865696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.143 [2024-11-20 13:50:42.873731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:40.143 [2024-11-20 13:50:42.876494] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:40.143 [2024-11-20 13:50:42.876720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:40.143 [2024-11-20 13:50:42.876866] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:40.143 [2024-11-20 13:50:42.877051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:40.143 [2024-11-20 13:50:42.877166] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:40.143 [2024-11-20 13:50:42.877326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.143 13:50:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:40.143 "name": "Existed_Raid", 00:30:40.143 "uuid": "7f19f1ef-b986-4adc-aa45-af318ba29eaa", 00:30:40.143 "strip_size_kb": 64, 00:30:40.143 "state": "configuring", 00:30:40.143 "raid_level": "raid5f", 00:30:40.143 "superblock": true, 00:30:40.143 "num_base_bdevs": 4, 00:30:40.143 "num_base_bdevs_discovered": 1, 00:30:40.143 "num_base_bdevs_operational": 4, 00:30:40.143 "base_bdevs_list": [ 00:30:40.143 { 00:30:40.143 "name": "BaseBdev1", 00:30:40.143 "uuid": "0c9ce77f-41cd-4287-b517-d6fb2ac281b6", 00:30:40.143 "is_configured": true, 00:30:40.143 "data_offset": 2048, 00:30:40.143 "data_size": 63488 00:30:40.143 }, 00:30:40.143 { 00:30:40.143 "name": "BaseBdev2", 00:30:40.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.143 "is_configured": false, 00:30:40.143 "data_offset": 0, 00:30:40.143 "data_size": 0 00:30:40.143 }, 00:30:40.143 { 00:30:40.143 "name": "BaseBdev3", 00:30:40.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.143 "is_configured": false, 00:30:40.143 "data_offset": 0, 00:30:40.143 "data_size": 0 00:30:40.143 }, 00:30:40.143 { 00:30:40.143 "name": "BaseBdev4", 00:30:40.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.143 "is_configured": false, 00:30:40.143 "data_offset": 0, 00:30:40.143 "data_size": 0 00:30:40.143 } 00:30:40.143 ] 00:30:40.143 }' 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:40.143 13:50:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.711 [2024-11-20 13:50:43.469472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:40.711 BaseBdev2 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.711 [ 00:30:40.711 { 00:30:40.711 "name": "BaseBdev2", 00:30:40.711 "aliases": [ 00:30:40.711 
"1149f242-b376-4c6e-a2ff-a0dd34368728" 00:30:40.711 ], 00:30:40.711 "product_name": "Malloc disk", 00:30:40.711 "block_size": 512, 00:30:40.711 "num_blocks": 65536, 00:30:40.711 "uuid": "1149f242-b376-4c6e-a2ff-a0dd34368728", 00:30:40.711 "assigned_rate_limits": { 00:30:40.711 "rw_ios_per_sec": 0, 00:30:40.711 "rw_mbytes_per_sec": 0, 00:30:40.711 "r_mbytes_per_sec": 0, 00:30:40.711 "w_mbytes_per_sec": 0 00:30:40.711 }, 00:30:40.711 "claimed": true, 00:30:40.711 "claim_type": "exclusive_write", 00:30:40.711 "zoned": false, 00:30:40.711 "supported_io_types": { 00:30:40.711 "read": true, 00:30:40.711 "write": true, 00:30:40.711 "unmap": true, 00:30:40.711 "flush": true, 00:30:40.711 "reset": true, 00:30:40.711 "nvme_admin": false, 00:30:40.711 "nvme_io": false, 00:30:40.711 "nvme_io_md": false, 00:30:40.711 "write_zeroes": true, 00:30:40.711 "zcopy": true, 00:30:40.711 "get_zone_info": false, 00:30:40.711 "zone_management": false, 00:30:40.711 "zone_append": false, 00:30:40.711 "compare": false, 00:30:40.711 "compare_and_write": false, 00:30:40.711 "abort": true, 00:30:40.711 "seek_hole": false, 00:30:40.711 "seek_data": false, 00:30:40.711 "copy": true, 00:30:40.711 "nvme_iov_md": false 00:30:40.711 }, 00:30:40.711 "memory_domains": [ 00:30:40.711 { 00:30:40.711 "dma_device_id": "system", 00:30:40.711 "dma_device_type": 1 00:30:40.711 }, 00:30:40.711 { 00:30:40.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:40.711 "dma_device_type": 2 00:30:40.711 } 00:30:40.711 ], 00:30:40.711 "driver_specific": {} 00:30:40.711 } 00:30:40.711 ] 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.711 13:50:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:40.711 "name": "Existed_Raid", 00:30:40.711 "uuid": 
"7f19f1ef-b986-4adc-aa45-af318ba29eaa", 00:30:40.711 "strip_size_kb": 64, 00:30:40.711 "state": "configuring", 00:30:40.711 "raid_level": "raid5f", 00:30:40.711 "superblock": true, 00:30:40.711 "num_base_bdevs": 4, 00:30:40.711 "num_base_bdevs_discovered": 2, 00:30:40.712 "num_base_bdevs_operational": 4, 00:30:40.712 "base_bdevs_list": [ 00:30:40.712 { 00:30:40.712 "name": "BaseBdev1", 00:30:40.712 "uuid": "0c9ce77f-41cd-4287-b517-d6fb2ac281b6", 00:30:40.712 "is_configured": true, 00:30:40.712 "data_offset": 2048, 00:30:40.712 "data_size": 63488 00:30:40.712 }, 00:30:40.712 { 00:30:40.712 "name": "BaseBdev2", 00:30:40.712 "uuid": "1149f242-b376-4c6e-a2ff-a0dd34368728", 00:30:40.712 "is_configured": true, 00:30:40.712 "data_offset": 2048, 00:30:40.712 "data_size": 63488 00:30:40.712 }, 00:30:40.712 { 00:30:40.712 "name": "BaseBdev3", 00:30:40.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.712 "is_configured": false, 00:30:40.712 "data_offset": 0, 00:30:40.712 "data_size": 0 00:30:40.712 }, 00:30:40.712 { 00:30:40.712 "name": "BaseBdev4", 00:30:40.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.712 "is_configured": false, 00:30:40.712 "data_offset": 0, 00:30:40.712 "data_size": 0 00:30:40.712 } 00:30:40.712 ] 00:30:40.712 }' 00:30:40.712 13:50:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:40.712 13:50:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.280 [2024-11-20 13:50:44.079901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:41.280 BaseBdev3 
00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.280 [ 00:30:41.280 { 00:30:41.280 "name": "BaseBdev3", 00:30:41.280 "aliases": [ 00:30:41.280 "ba1604f4-a3e5-4105-b6b9-f82da53f65ef" 00:30:41.280 ], 00:30:41.280 "product_name": "Malloc disk", 00:30:41.280 "block_size": 512, 00:30:41.280 "num_blocks": 65536, 00:30:41.280 "uuid": "ba1604f4-a3e5-4105-b6b9-f82da53f65ef", 00:30:41.280 
"assigned_rate_limits": { 00:30:41.280 "rw_ios_per_sec": 0, 00:30:41.280 "rw_mbytes_per_sec": 0, 00:30:41.280 "r_mbytes_per_sec": 0, 00:30:41.280 "w_mbytes_per_sec": 0 00:30:41.280 }, 00:30:41.280 "claimed": true, 00:30:41.280 "claim_type": "exclusive_write", 00:30:41.280 "zoned": false, 00:30:41.280 "supported_io_types": { 00:30:41.280 "read": true, 00:30:41.280 "write": true, 00:30:41.280 "unmap": true, 00:30:41.280 "flush": true, 00:30:41.280 "reset": true, 00:30:41.280 "nvme_admin": false, 00:30:41.280 "nvme_io": false, 00:30:41.280 "nvme_io_md": false, 00:30:41.280 "write_zeroes": true, 00:30:41.280 "zcopy": true, 00:30:41.280 "get_zone_info": false, 00:30:41.280 "zone_management": false, 00:30:41.280 "zone_append": false, 00:30:41.280 "compare": false, 00:30:41.280 "compare_and_write": false, 00:30:41.280 "abort": true, 00:30:41.280 "seek_hole": false, 00:30:41.280 "seek_data": false, 00:30:41.280 "copy": true, 00:30:41.280 "nvme_iov_md": false 00:30:41.280 }, 00:30:41.280 "memory_domains": [ 00:30:41.280 { 00:30:41.280 "dma_device_id": "system", 00:30:41.280 "dma_device_type": 1 00:30:41.280 }, 00:30:41.280 { 00:30:41.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:41.280 "dma_device_type": 2 00:30:41.280 } 00:30:41.280 ], 00:30:41.280 "driver_specific": {} 00:30:41.280 } 00:30:41.280 ] 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:41.280 "name": "Existed_Raid", 00:30:41.280 "uuid": "7f19f1ef-b986-4adc-aa45-af318ba29eaa", 00:30:41.280 "strip_size_kb": 64, 00:30:41.280 "state": "configuring", 00:30:41.280 "raid_level": "raid5f", 00:30:41.280 "superblock": true, 00:30:41.280 "num_base_bdevs": 4, 00:30:41.280 "num_base_bdevs_discovered": 3, 
00:30:41.280 "num_base_bdevs_operational": 4, 00:30:41.280 "base_bdevs_list": [ 00:30:41.280 { 00:30:41.280 "name": "BaseBdev1", 00:30:41.280 "uuid": "0c9ce77f-41cd-4287-b517-d6fb2ac281b6", 00:30:41.280 "is_configured": true, 00:30:41.280 "data_offset": 2048, 00:30:41.280 "data_size": 63488 00:30:41.280 }, 00:30:41.280 { 00:30:41.280 "name": "BaseBdev2", 00:30:41.280 "uuid": "1149f242-b376-4c6e-a2ff-a0dd34368728", 00:30:41.280 "is_configured": true, 00:30:41.280 "data_offset": 2048, 00:30:41.280 "data_size": 63488 00:30:41.280 }, 00:30:41.280 { 00:30:41.280 "name": "BaseBdev3", 00:30:41.280 "uuid": "ba1604f4-a3e5-4105-b6b9-f82da53f65ef", 00:30:41.280 "is_configured": true, 00:30:41.280 "data_offset": 2048, 00:30:41.280 "data_size": 63488 00:30:41.280 }, 00:30:41.280 { 00:30:41.280 "name": "BaseBdev4", 00:30:41.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.280 "is_configured": false, 00:30:41.280 "data_offset": 0, 00:30:41.280 "data_size": 0 00:30:41.280 } 00:30:41.280 ] 00:30:41.280 }' 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:41.280 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.848 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:30:41.848 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.848 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.848 [2024-11-20 13:50:44.650106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:41.848 [2024-11-20 13:50:44.650730] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:41.848 [2024-11-20 13:50:44.650756] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:41.848 [2024-11-20 
13:50:44.651112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:41.848 BaseBdev4 00:30:41.848 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.848 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:30:41.848 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:30:41.848 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:41.848 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:41.848 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:41.848 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:41.848 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:41.848 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.848 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.848 [2024-11-20 13:50:44.657851] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:41.848 [2024-11-20 13:50:44.658071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:30:41.848 [2024-11-20 13:50:44.658592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:41.848 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.848 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:41.848 13:50:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.848 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.848 [ 00:30:41.848 { 00:30:41.848 "name": "BaseBdev4", 00:30:41.848 "aliases": [ 00:30:41.848 "817bf115-78e2-46af-ab3e-4758d80e2af6" 00:30:41.848 ], 00:30:41.848 "product_name": "Malloc disk", 00:30:41.848 "block_size": 512, 00:30:41.848 "num_blocks": 65536, 00:30:41.848 "uuid": "817bf115-78e2-46af-ab3e-4758d80e2af6", 00:30:41.848 "assigned_rate_limits": { 00:30:41.848 "rw_ios_per_sec": 0, 00:30:41.848 "rw_mbytes_per_sec": 0, 00:30:41.848 "r_mbytes_per_sec": 0, 00:30:41.848 "w_mbytes_per_sec": 0 00:30:41.848 }, 00:30:41.848 "claimed": true, 00:30:41.848 "claim_type": "exclusive_write", 00:30:41.848 "zoned": false, 00:30:41.848 "supported_io_types": { 00:30:41.848 "read": true, 00:30:41.848 "write": true, 00:30:41.848 "unmap": true, 00:30:41.848 "flush": true, 00:30:41.848 "reset": true, 00:30:41.848 "nvme_admin": false, 00:30:41.848 "nvme_io": false, 00:30:41.848 "nvme_io_md": false, 00:30:41.848 "write_zeroes": true, 00:30:41.848 "zcopy": true, 00:30:41.848 "get_zone_info": false, 00:30:41.848 "zone_management": false, 00:30:41.848 "zone_append": false, 00:30:41.848 "compare": false, 00:30:41.848 "compare_and_write": false, 00:30:41.848 "abort": true, 00:30:41.848 "seek_hole": false, 00:30:41.848 "seek_data": false, 00:30:41.848 "copy": true, 00:30:41.848 "nvme_iov_md": false 00:30:41.848 }, 00:30:41.848 "memory_domains": [ 00:30:41.848 { 00:30:41.848 "dma_device_id": "system", 00:30:41.848 "dma_device_type": 1 00:30:41.848 }, 00:30:41.848 { 00:30:41.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:41.848 "dma_device_type": 2 00:30:41.848 } 00:30:41.848 ], 00:30:41.848 "driver_specific": {} 00:30:41.848 } 00:30:41.848 ] 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.849 13:50:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:41.849 "name": "Existed_Raid", 00:30:41.849 "uuid": "7f19f1ef-b986-4adc-aa45-af318ba29eaa", 00:30:41.849 "strip_size_kb": 64, 00:30:41.849 "state": "online", 00:30:41.849 "raid_level": "raid5f", 00:30:41.849 "superblock": true, 00:30:41.849 "num_base_bdevs": 4, 00:30:41.849 "num_base_bdevs_discovered": 4, 00:30:41.849 "num_base_bdevs_operational": 4, 00:30:41.849 "base_bdevs_list": [ 00:30:41.849 { 00:30:41.849 "name": "BaseBdev1", 00:30:41.849 "uuid": "0c9ce77f-41cd-4287-b517-d6fb2ac281b6", 00:30:41.849 "is_configured": true, 00:30:41.849 "data_offset": 2048, 00:30:41.849 "data_size": 63488 00:30:41.849 }, 00:30:41.849 { 00:30:41.849 "name": "BaseBdev2", 00:30:41.849 "uuid": "1149f242-b376-4c6e-a2ff-a0dd34368728", 00:30:41.849 "is_configured": true, 00:30:41.849 "data_offset": 2048, 00:30:41.849 "data_size": 63488 00:30:41.849 }, 00:30:41.849 { 00:30:41.849 "name": "BaseBdev3", 00:30:41.849 "uuid": "ba1604f4-a3e5-4105-b6b9-f82da53f65ef", 00:30:41.849 "is_configured": true, 00:30:41.849 "data_offset": 2048, 00:30:41.849 "data_size": 63488 00:30:41.849 }, 00:30:41.849 { 00:30:41.849 "name": "BaseBdev4", 00:30:41.849 "uuid": "817bf115-78e2-46af-ab3e-4758d80e2af6", 00:30:41.849 "is_configured": true, 00:30:41.849 "data_offset": 2048, 00:30:41.849 "data_size": 63488 00:30:41.849 } 00:30:41.849 ] 00:30:41.849 }' 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:41.849 13:50:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.415 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:30:42.415 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:30:42.415 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:42.415 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:42.415 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:30:42.415 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:42.415 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:42.416 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:42.416 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.416 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.416 [2024-11-20 13:50:45.218174] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:42.416 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.416 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:42.416 "name": "Existed_Raid", 00:30:42.416 "aliases": [ 00:30:42.416 "7f19f1ef-b986-4adc-aa45-af318ba29eaa" 00:30:42.416 ], 00:30:42.416 "product_name": "Raid Volume", 00:30:42.416 "block_size": 512, 00:30:42.416 "num_blocks": 190464, 00:30:42.416 "uuid": "7f19f1ef-b986-4adc-aa45-af318ba29eaa", 00:30:42.416 "assigned_rate_limits": { 00:30:42.416 "rw_ios_per_sec": 0, 00:30:42.416 "rw_mbytes_per_sec": 0, 00:30:42.416 "r_mbytes_per_sec": 0, 00:30:42.416 "w_mbytes_per_sec": 0 00:30:42.416 }, 00:30:42.416 "claimed": false, 00:30:42.416 "zoned": false, 00:30:42.416 "supported_io_types": { 00:30:42.416 "read": true, 00:30:42.416 "write": true, 00:30:42.416 "unmap": false, 00:30:42.416 "flush": false, 
00:30:42.416 "reset": true, 00:30:42.416 "nvme_admin": false, 00:30:42.416 "nvme_io": false, 00:30:42.416 "nvme_io_md": false, 00:30:42.416 "write_zeroes": true, 00:30:42.416 "zcopy": false, 00:30:42.416 "get_zone_info": false, 00:30:42.416 "zone_management": false, 00:30:42.416 "zone_append": false, 00:30:42.416 "compare": false, 00:30:42.416 "compare_and_write": false, 00:30:42.416 "abort": false, 00:30:42.416 "seek_hole": false, 00:30:42.416 "seek_data": false, 00:30:42.416 "copy": false, 00:30:42.416 "nvme_iov_md": false 00:30:42.416 }, 00:30:42.416 "driver_specific": { 00:30:42.416 "raid": { 00:30:42.416 "uuid": "7f19f1ef-b986-4adc-aa45-af318ba29eaa", 00:30:42.416 "strip_size_kb": 64, 00:30:42.416 "state": "online", 00:30:42.416 "raid_level": "raid5f", 00:30:42.416 "superblock": true, 00:30:42.416 "num_base_bdevs": 4, 00:30:42.416 "num_base_bdevs_discovered": 4, 00:30:42.416 "num_base_bdevs_operational": 4, 00:30:42.416 "base_bdevs_list": [ 00:30:42.416 { 00:30:42.416 "name": "BaseBdev1", 00:30:42.416 "uuid": "0c9ce77f-41cd-4287-b517-d6fb2ac281b6", 00:30:42.416 "is_configured": true, 00:30:42.416 "data_offset": 2048, 00:30:42.416 "data_size": 63488 00:30:42.416 }, 00:30:42.416 { 00:30:42.416 "name": "BaseBdev2", 00:30:42.416 "uuid": "1149f242-b376-4c6e-a2ff-a0dd34368728", 00:30:42.416 "is_configured": true, 00:30:42.416 "data_offset": 2048, 00:30:42.416 "data_size": 63488 00:30:42.416 }, 00:30:42.416 { 00:30:42.416 "name": "BaseBdev3", 00:30:42.416 "uuid": "ba1604f4-a3e5-4105-b6b9-f82da53f65ef", 00:30:42.416 "is_configured": true, 00:30:42.416 "data_offset": 2048, 00:30:42.416 "data_size": 63488 00:30:42.416 }, 00:30:42.416 { 00:30:42.416 "name": "BaseBdev4", 00:30:42.416 "uuid": "817bf115-78e2-46af-ab3e-4758d80e2af6", 00:30:42.416 "is_configured": true, 00:30:42.416 "data_offset": 2048, 00:30:42.416 "data_size": 63488 00:30:42.416 } 00:30:42.416 ] 00:30:42.416 } 00:30:42.416 } 00:30:42.416 }' 00:30:42.416 13:50:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:42.416 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:30:42.416 BaseBdev2 00:30:42.416 BaseBdev3 00:30:42.416 BaseBdev4' 00:30:42.416 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:42.675 13:50:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:42.675 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:30:42.675 13:50:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.676 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.676 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.676 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:42.676 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:42.676 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:42.676 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.676 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.934 [2024-11-20 13:50:45.590106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:42.935 "name": "Existed_Raid", 00:30:42.935 "uuid": "7f19f1ef-b986-4adc-aa45-af318ba29eaa", 00:30:42.935 "strip_size_kb": 64, 00:30:42.935 "state": "online", 00:30:42.935 "raid_level": "raid5f", 00:30:42.935 "superblock": true, 00:30:42.935 "num_base_bdevs": 4, 00:30:42.935 "num_base_bdevs_discovered": 3, 00:30:42.935 "num_base_bdevs_operational": 3, 00:30:42.935 "base_bdevs_list": [ 00:30:42.935 { 00:30:42.935 "name": 
null, 00:30:42.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:42.935 "is_configured": false, 00:30:42.935 "data_offset": 0, 00:30:42.935 "data_size": 63488 00:30:42.935 }, 00:30:42.935 { 00:30:42.935 "name": "BaseBdev2", 00:30:42.935 "uuid": "1149f242-b376-4c6e-a2ff-a0dd34368728", 00:30:42.935 "is_configured": true, 00:30:42.935 "data_offset": 2048, 00:30:42.935 "data_size": 63488 00:30:42.935 }, 00:30:42.935 { 00:30:42.935 "name": "BaseBdev3", 00:30:42.935 "uuid": "ba1604f4-a3e5-4105-b6b9-f82da53f65ef", 00:30:42.935 "is_configured": true, 00:30:42.935 "data_offset": 2048, 00:30:42.935 "data_size": 63488 00:30:42.935 }, 00:30:42.935 { 00:30:42.935 "name": "BaseBdev4", 00:30:42.935 "uuid": "817bf115-78e2-46af-ab3e-4758d80e2af6", 00:30:42.935 "is_configured": true, 00:30:42.935 "data_offset": 2048, 00:30:42.935 "data_size": 63488 00:30:42.935 } 00:30:42.935 ] 00:30:42.935 }' 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:42.935 13:50:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.502 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:30:43.502 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:43.502 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:43.502 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.502 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.502 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:43.502 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.502 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:30:43.502 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:43.502 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:30:43.502 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.502 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.502 [2024-11-20 13:50:46.262778] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:43.502 [2024-11-20 13:50:46.263172] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:43.502 [2024-11-20 13:50:46.340896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:43.502 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.502 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:43.502 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:43.502 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:43.503 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.503 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:43.503 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.503 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.503 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:43.503 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:30:43.503 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:30:43.503 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.503 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.503 [2024-11-20 13:50:46.405031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.762 [2024-11-20 
13:50:46.546806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:30:43.762 [2024-11-20 13:50:46.547048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:43.762 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.022 13:50:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.022 BaseBdev2 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.022 [ 00:30:44.022 { 00:30:44.022 "name": "BaseBdev2", 00:30:44.022 "aliases": [ 00:30:44.022 "2e7f1287-f140-4116-b347-50a91a8662f0" 00:30:44.022 ], 00:30:44.022 "product_name": "Malloc disk", 00:30:44.022 "block_size": 512, 00:30:44.022 
"num_blocks": 65536, 00:30:44.022 "uuid": "2e7f1287-f140-4116-b347-50a91a8662f0", 00:30:44.022 "assigned_rate_limits": { 00:30:44.022 "rw_ios_per_sec": 0, 00:30:44.022 "rw_mbytes_per_sec": 0, 00:30:44.022 "r_mbytes_per_sec": 0, 00:30:44.022 "w_mbytes_per_sec": 0 00:30:44.022 }, 00:30:44.022 "claimed": false, 00:30:44.022 "zoned": false, 00:30:44.022 "supported_io_types": { 00:30:44.022 "read": true, 00:30:44.022 "write": true, 00:30:44.022 "unmap": true, 00:30:44.022 "flush": true, 00:30:44.022 "reset": true, 00:30:44.022 "nvme_admin": false, 00:30:44.022 "nvme_io": false, 00:30:44.022 "nvme_io_md": false, 00:30:44.022 "write_zeroes": true, 00:30:44.022 "zcopy": true, 00:30:44.022 "get_zone_info": false, 00:30:44.022 "zone_management": false, 00:30:44.022 "zone_append": false, 00:30:44.022 "compare": false, 00:30:44.022 "compare_and_write": false, 00:30:44.022 "abort": true, 00:30:44.022 "seek_hole": false, 00:30:44.022 "seek_data": false, 00:30:44.022 "copy": true, 00:30:44.022 "nvme_iov_md": false 00:30:44.022 }, 00:30:44.022 "memory_domains": [ 00:30:44.022 { 00:30:44.022 "dma_device_id": "system", 00:30:44.022 "dma_device_type": 1 00:30:44.022 }, 00:30:44.022 { 00:30:44.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:44.022 "dma_device_type": 2 00:30:44.022 } 00:30:44.022 ], 00:30:44.022 "driver_specific": {} 00:30:44.022 } 00:30:44.022 ] 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:44.022 13:50:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.022 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.022 BaseBdev3 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.023 [ 00:30:44.023 { 00:30:44.023 "name": "BaseBdev3", 00:30:44.023 "aliases": [ 00:30:44.023 
"e02b7003-3183-4454-a077-1207fc4cb72e" 00:30:44.023 ], 00:30:44.023 "product_name": "Malloc disk", 00:30:44.023 "block_size": 512, 00:30:44.023 "num_blocks": 65536, 00:30:44.023 "uuid": "e02b7003-3183-4454-a077-1207fc4cb72e", 00:30:44.023 "assigned_rate_limits": { 00:30:44.023 "rw_ios_per_sec": 0, 00:30:44.023 "rw_mbytes_per_sec": 0, 00:30:44.023 "r_mbytes_per_sec": 0, 00:30:44.023 "w_mbytes_per_sec": 0 00:30:44.023 }, 00:30:44.023 "claimed": false, 00:30:44.023 "zoned": false, 00:30:44.023 "supported_io_types": { 00:30:44.023 "read": true, 00:30:44.023 "write": true, 00:30:44.023 "unmap": true, 00:30:44.023 "flush": true, 00:30:44.023 "reset": true, 00:30:44.023 "nvme_admin": false, 00:30:44.023 "nvme_io": false, 00:30:44.023 "nvme_io_md": false, 00:30:44.023 "write_zeroes": true, 00:30:44.023 "zcopy": true, 00:30:44.023 "get_zone_info": false, 00:30:44.023 "zone_management": false, 00:30:44.023 "zone_append": false, 00:30:44.023 "compare": false, 00:30:44.023 "compare_and_write": false, 00:30:44.023 "abort": true, 00:30:44.023 "seek_hole": false, 00:30:44.023 "seek_data": false, 00:30:44.023 "copy": true, 00:30:44.023 "nvme_iov_md": false 00:30:44.023 }, 00:30:44.023 "memory_domains": [ 00:30:44.023 { 00:30:44.023 "dma_device_id": "system", 00:30:44.023 "dma_device_type": 1 00:30:44.023 }, 00:30:44.023 { 00:30:44.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:44.023 "dma_device_type": 2 00:30:44.023 } 00:30:44.023 ], 00:30:44.023 "driver_specific": {} 00:30:44.023 } 00:30:44.023 ] 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:44.023 13:50:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.023 BaseBdev4 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:30:44.023 [ 00:30:44.023 { 00:30:44.023 "name": "BaseBdev4", 00:30:44.023 "aliases": [ 00:30:44.023 "56cb5e46-e659-4714-8011-a8be2accc8b1" 00:30:44.023 ], 00:30:44.023 "product_name": "Malloc disk", 00:30:44.023 "block_size": 512, 00:30:44.023 "num_blocks": 65536, 00:30:44.023 "uuid": "56cb5e46-e659-4714-8011-a8be2accc8b1", 00:30:44.023 "assigned_rate_limits": { 00:30:44.023 "rw_ios_per_sec": 0, 00:30:44.023 "rw_mbytes_per_sec": 0, 00:30:44.023 "r_mbytes_per_sec": 0, 00:30:44.023 "w_mbytes_per_sec": 0 00:30:44.023 }, 00:30:44.023 "claimed": false, 00:30:44.023 "zoned": false, 00:30:44.023 "supported_io_types": { 00:30:44.023 "read": true, 00:30:44.023 "write": true, 00:30:44.023 "unmap": true, 00:30:44.023 "flush": true, 00:30:44.023 "reset": true, 00:30:44.023 "nvme_admin": false, 00:30:44.023 "nvme_io": false, 00:30:44.023 "nvme_io_md": false, 00:30:44.023 "write_zeroes": true, 00:30:44.023 "zcopy": true, 00:30:44.023 "get_zone_info": false, 00:30:44.023 "zone_management": false, 00:30:44.023 "zone_append": false, 00:30:44.023 "compare": false, 00:30:44.023 "compare_and_write": false, 00:30:44.023 "abort": true, 00:30:44.023 "seek_hole": false, 00:30:44.023 "seek_data": false, 00:30:44.023 "copy": true, 00:30:44.023 "nvme_iov_md": false 00:30:44.023 }, 00:30:44.023 "memory_domains": [ 00:30:44.023 { 00:30:44.023 "dma_device_id": "system", 00:30:44.023 "dma_device_type": 1 00:30:44.023 }, 00:30:44.023 { 00:30:44.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:44.023 "dma_device_type": 2 00:30:44.023 } 00:30:44.023 ], 00:30:44.023 "driver_specific": {} 00:30:44.023 } 00:30:44.023 ] 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:44.023 13:50:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.023 [2024-11-20 13:50:46.899363] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:44.023 [2024-11-20 13:50:46.899561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:44.023 [2024-11-20 13:50:46.899742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:44.023 [2024-11-20 13:50:46.902258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:44.023 [2024-11-20 13:50:46.902332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.023 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.281 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:44.281 "name": "Existed_Raid", 00:30:44.281 "uuid": "4d96ea8d-77dd-4b20-8c49-ebee6baa4889", 00:30:44.281 "strip_size_kb": 64, 00:30:44.281 "state": "configuring", 00:30:44.281 "raid_level": "raid5f", 00:30:44.281 "superblock": true, 00:30:44.281 "num_base_bdevs": 4, 00:30:44.281 "num_base_bdevs_discovered": 3, 00:30:44.281 "num_base_bdevs_operational": 4, 00:30:44.281 "base_bdevs_list": [ 00:30:44.281 { 00:30:44.281 "name": "BaseBdev1", 00:30:44.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:44.281 "is_configured": false, 00:30:44.281 "data_offset": 0, 00:30:44.281 "data_size": 0 00:30:44.281 }, 00:30:44.281 { 00:30:44.281 "name": "BaseBdev2", 00:30:44.281 "uuid": "2e7f1287-f140-4116-b347-50a91a8662f0", 00:30:44.281 "is_configured": true, 00:30:44.281 "data_offset": 2048, 00:30:44.281 
"data_size": 63488 00:30:44.281 }, 00:30:44.281 { 00:30:44.281 "name": "BaseBdev3", 00:30:44.281 "uuid": "e02b7003-3183-4454-a077-1207fc4cb72e", 00:30:44.281 "is_configured": true, 00:30:44.281 "data_offset": 2048, 00:30:44.281 "data_size": 63488 00:30:44.281 }, 00:30:44.281 { 00:30:44.281 "name": "BaseBdev4", 00:30:44.281 "uuid": "56cb5e46-e659-4714-8011-a8be2accc8b1", 00:30:44.281 "is_configured": true, 00:30:44.281 "data_offset": 2048, 00:30:44.281 "data_size": 63488 00:30:44.281 } 00:30:44.281 ] 00:30:44.281 }' 00:30:44.281 13:50:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:44.281 13:50:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.539 13:50:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:44.539 13:50:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.539 13:50:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.539 [2024-11-20 13:50:47.415508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:44.539 13:50:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.539 13:50:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:44.539 13:50:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:44.539 13:50:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:44.539 13:50:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:44.539 13:50:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:44.539 13:50:47 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:44.539 13:50:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:44.539 13:50:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:44.539 13:50:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:44.539 13:50:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:44.539 13:50:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:44.539 13:50:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.539 13:50:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.539 13:50:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:44.539 13:50:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.797 13:50:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:44.797 "name": "Existed_Raid", 00:30:44.797 "uuid": "4d96ea8d-77dd-4b20-8c49-ebee6baa4889", 00:30:44.797 "strip_size_kb": 64, 00:30:44.797 "state": "configuring", 00:30:44.797 "raid_level": "raid5f", 00:30:44.797 "superblock": true, 00:30:44.797 "num_base_bdevs": 4, 00:30:44.797 "num_base_bdevs_discovered": 2, 00:30:44.797 "num_base_bdevs_operational": 4, 00:30:44.797 "base_bdevs_list": [ 00:30:44.797 { 00:30:44.797 "name": "BaseBdev1", 00:30:44.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:44.797 "is_configured": false, 00:30:44.797 "data_offset": 0, 00:30:44.797 "data_size": 0 00:30:44.797 }, 00:30:44.797 { 00:30:44.797 "name": null, 00:30:44.797 "uuid": "2e7f1287-f140-4116-b347-50a91a8662f0", 00:30:44.797 
"is_configured": false, 00:30:44.797 "data_offset": 0, 00:30:44.797 "data_size": 63488 00:30:44.797 }, 00:30:44.797 { 00:30:44.797 "name": "BaseBdev3", 00:30:44.797 "uuid": "e02b7003-3183-4454-a077-1207fc4cb72e", 00:30:44.797 "is_configured": true, 00:30:44.797 "data_offset": 2048, 00:30:44.797 "data_size": 63488 00:30:44.797 }, 00:30:44.797 { 00:30:44.797 "name": "BaseBdev4", 00:30:44.797 "uuid": "56cb5e46-e659-4714-8011-a8be2accc8b1", 00:30:44.797 "is_configured": true, 00:30:44.797 "data_offset": 2048, 00:30:44.797 "data_size": 63488 00:30:44.797 } 00:30:44.797 ] 00:30:44.797 }' 00:30:44.797 13:50:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:44.797 13:50:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.363 13:50:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:45.363 13:50:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.363 13:50:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.363 13:50:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:45.363 13:50:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.363 [2024-11-20 13:50:48.062405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:30:45.363 BaseBdev1 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.363 [ 00:30:45.363 { 00:30:45.363 "name": "BaseBdev1", 00:30:45.363 "aliases": [ 00:30:45.363 "7dae8e88-a586-4220-8563-210c7ec28fab" 00:30:45.363 ], 00:30:45.363 "product_name": "Malloc disk", 00:30:45.363 "block_size": 512, 00:30:45.363 "num_blocks": 65536, 00:30:45.363 "uuid": "7dae8e88-a586-4220-8563-210c7ec28fab", 
00:30:45.363 "assigned_rate_limits": { 00:30:45.363 "rw_ios_per_sec": 0, 00:30:45.363 "rw_mbytes_per_sec": 0, 00:30:45.363 "r_mbytes_per_sec": 0, 00:30:45.363 "w_mbytes_per_sec": 0 00:30:45.363 }, 00:30:45.363 "claimed": true, 00:30:45.363 "claim_type": "exclusive_write", 00:30:45.363 "zoned": false, 00:30:45.363 "supported_io_types": { 00:30:45.363 "read": true, 00:30:45.363 "write": true, 00:30:45.363 "unmap": true, 00:30:45.363 "flush": true, 00:30:45.363 "reset": true, 00:30:45.363 "nvme_admin": false, 00:30:45.363 "nvme_io": false, 00:30:45.363 "nvme_io_md": false, 00:30:45.363 "write_zeroes": true, 00:30:45.363 "zcopy": true, 00:30:45.363 "get_zone_info": false, 00:30:45.363 "zone_management": false, 00:30:45.363 "zone_append": false, 00:30:45.363 "compare": false, 00:30:45.363 "compare_and_write": false, 00:30:45.363 "abort": true, 00:30:45.363 "seek_hole": false, 00:30:45.363 "seek_data": false, 00:30:45.363 "copy": true, 00:30:45.363 "nvme_iov_md": false 00:30:45.363 }, 00:30:45.363 "memory_domains": [ 00:30:45.363 { 00:30:45.363 "dma_device_id": "system", 00:30:45.363 "dma_device_type": 1 00:30:45.363 }, 00:30:45.363 { 00:30:45.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:45.363 "dma_device_type": 2 00:30:45.363 } 00:30:45.363 ], 00:30:45.363 "driver_specific": {} 00:30:45.363 } 00:30:45.363 ] 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:45.363 13:50:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.363 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:45.363 "name": "Existed_Raid", 00:30:45.363 "uuid": "4d96ea8d-77dd-4b20-8c49-ebee6baa4889", 00:30:45.363 "strip_size_kb": 64, 00:30:45.363 "state": "configuring", 00:30:45.363 "raid_level": "raid5f", 00:30:45.363 "superblock": true, 00:30:45.363 "num_base_bdevs": 4, 00:30:45.363 "num_base_bdevs_discovered": 3, 00:30:45.363 "num_base_bdevs_operational": 4, 00:30:45.363 "base_bdevs_list": [ 00:30:45.363 { 00:30:45.363 "name": "BaseBdev1", 00:30:45.363 "uuid": "7dae8e88-a586-4220-8563-210c7ec28fab", 
00:30:45.363 "is_configured": true, 00:30:45.363 "data_offset": 2048, 00:30:45.363 "data_size": 63488 00:30:45.363 }, 00:30:45.363 { 00:30:45.363 "name": null, 00:30:45.363 "uuid": "2e7f1287-f140-4116-b347-50a91a8662f0", 00:30:45.363 "is_configured": false, 00:30:45.363 "data_offset": 0, 00:30:45.363 "data_size": 63488 00:30:45.363 }, 00:30:45.363 { 00:30:45.363 "name": "BaseBdev3", 00:30:45.363 "uuid": "e02b7003-3183-4454-a077-1207fc4cb72e", 00:30:45.363 "is_configured": true, 00:30:45.363 "data_offset": 2048, 00:30:45.363 "data_size": 63488 00:30:45.364 }, 00:30:45.364 { 00:30:45.364 "name": "BaseBdev4", 00:30:45.364 "uuid": "56cb5e46-e659-4714-8011-a8be2accc8b1", 00:30:45.364 "is_configured": true, 00:30:45.364 "data_offset": 2048, 00:30:45.364 "data_size": 63488 00:30:45.364 } 00:30:45.364 ] 00:30:45.364 }' 00:30:45.364 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:45.364 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.928 [2024-11-20 13:50:48.674669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:45.928 "name": "Existed_Raid", 00:30:45.928 "uuid": "4d96ea8d-77dd-4b20-8c49-ebee6baa4889", 00:30:45.928 "strip_size_kb": 64, 00:30:45.928 "state": "configuring", 00:30:45.928 "raid_level": "raid5f", 00:30:45.928 "superblock": true, 00:30:45.928 "num_base_bdevs": 4, 00:30:45.928 "num_base_bdevs_discovered": 2, 00:30:45.928 "num_base_bdevs_operational": 4, 00:30:45.928 "base_bdevs_list": [ 00:30:45.928 { 00:30:45.928 "name": "BaseBdev1", 00:30:45.928 "uuid": "7dae8e88-a586-4220-8563-210c7ec28fab", 00:30:45.928 "is_configured": true, 00:30:45.928 "data_offset": 2048, 00:30:45.928 "data_size": 63488 00:30:45.928 }, 00:30:45.928 { 00:30:45.928 "name": null, 00:30:45.928 "uuid": "2e7f1287-f140-4116-b347-50a91a8662f0", 00:30:45.928 "is_configured": false, 00:30:45.928 "data_offset": 0, 00:30:45.928 "data_size": 63488 00:30:45.928 }, 00:30:45.928 { 00:30:45.928 "name": null, 00:30:45.928 "uuid": "e02b7003-3183-4454-a077-1207fc4cb72e", 00:30:45.928 "is_configured": false, 00:30:45.928 "data_offset": 0, 00:30:45.928 "data_size": 63488 00:30:45.928 }, 00:30:45.928 { 00:30:45.928 "name": "BaseBdev4", 00:30:45.928 "uuid": "56cb5e46-e659-4714-8011-a8be2accc8b1", 00:30:45.928 "is_configured": true, 00:30:45.928 "data_offset": 2048, 00:30:45.928 "data_size": 63488 00:30:45.928 } 00:30:45.928 ] 00:30:45.928 }' 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:45.928 13:50:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.491 [2024-11-20 13:50:49.254821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:46.491 "name": "Existed_Raid", 00:30:46.491 "uuid": "4d96ea8d-77dd-4b20-8c49-ebee6baa4889", 00:30:46.491 "strip_size_kb": 64, 00:30:46.491 "state": "configuring", 00:30:46.491 "raid_level": "raid5f", 00:30:46.491 "superblock": true, 00:30:46.491 "num_base_bdevs": 4, 00:30:46.491 "num_base_bdevs_discovered": 3, 00:30:46.491 "num_base_bdevs_operational": 4, 00:30:46.491 "base_bdevs_list": [ 00:30:46.491 { 00:30:46.491 "name": "BaseBdev1", 00:30:46.491 "uuid": "7dae8e88-a586-4220-8563-210c7ec28fab", 00:30:46.491 "is_configured": true, 00:30:46.491 "data_offset": 2048, 00:30:46.491 "data_size": 63488 00:30:46.491 }, 00:30:46.491 { 00:30:46.491 "name": null, 00:30:46.491 "uuid": "2e7f1287-f140-4116-b347-50a91a8662f0", 00:30:46.491 "is_configured": false, 00:30:46.491 "data_offset": 0, 00:30:46.491 "data_size": 63488 00:30:46.491 }, 00:30:46.491 { 00:30:46.491 "name": "BaseBdev3", 00:30:46.491 "uuid": "e02b7003-3183-4454-a077-1207fc4cb72e", 
00:30:46.491 "is_configured": true, 00:30:46.491 "data_offset": 2048, 00:30:46.491 "data_size": 63488 00:30:46.491 }, 00:30:46.491 { 00:30:46.491 "name": "BaseBdev4", 00:30:46.491 "uuid": "56cb5e46-e659-4714-8011-a8be2accc8b1", 00:30:46.491 "is_configured": true, 00:30:46.491 "data_offset": 2048, 00:30:46.491 "data_size": 63488 00:30:46.491 } 00:30:46.491 ] 00:30:46.491 }' 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:46.491 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.057 [2024-11-20 13:50:49.843071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.057 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.316 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:47.316 "name": "Existed_Raid", 00:30:47.316 "uuid": "4d96ea8d-77dd-4b20-8c49-ebee6baa4889", 00:30:47.316 "strip_size_kb": 64, 00:30:47.316 "state": "configuring", 00:30:47.316 "raid_level": "raid5f", 
00:30:47.316 "superblock": true, 00:30:47.316 "num_base_bdevs": 4, 00:30:47.316 "num_base_bdevs_discovered": 2, 00:30:47.316 "num_base_bdevs_operational": 4, 00:30:47.316 "base_bdevs_list": [ 00:30:47.316 { 00:30:47.316 "name": null, 00:30:47.316 "uuid": "7dae8e88-a586-4220-8563-210c7ec28fab", 00:30:47.316 "is_configured": false, 00:30:47.316 "data_offset": 0, 00:30:47.316 "data_size": 63488 00:30:47.316 }, 00:30:47.316 { 00:30:47.316 "name": null, 00:30:47.316 "uuid": "2e7f1287-f140-4116-b347-50a91a8662f0", 00:30:47.316 "is_configured": false, 00:30:47.316 "data_offset": 0, 00:30:47.316 "data_size": 63488 00:30:47.316 }, 00:30:47.316 { 00:30:47.316 "name": "BaseBdev3", 00:30:47.316 "uuid": "e02b7003-3183-4454-a077-1207fc4cb72e", 00:30:47.316 "is_configured": true, 00:30:47.316 "data_offset": 2048, 00:30:47.316 "data_size": 63488 00:30:47.316 }, 00:30:47.316 { 00:30:47.316 "name": "BaseBdev4", 00:30:47.316 "uuid": "56cb5e46-e659-4714-8011-a8be2accc8b1", 00:30:47.316 "is_configured": true, 00:30:47.316 "data_offset": 2048, 00:30:47.316 "data_size": 63488 00:30:47.316 } 00:30:47.316 ] 00:30:47.316 }' 00:30:47.316 13:50:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:47.316 13:50:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.575 13:50:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:47.575 13:50:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.575 13:50:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.575 13:50:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:47.575 13:50:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.834 [2024-11-20 13:50:50.526016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:47.834 "name": "Existed_Raid", 00:30:47.834 "uuid": "4d96ea8d-77dd-4b20-8c49-ebee6baa4889", 00:30:47.834 "strip_size_kb": 64, 00:30:47.834 "state": "configuring", 00:30:47.834 "raid_level": "raid5f", 00:30:47.834 "superblock": true, 00:30:47.834 "num_base_bdevs": 4, 00:30:47.834 "num_base_bdevs_discovered": 3, 00:30:47.834 "num_base_bdevs_operational": 4, 00:30:47.834 "base_bdevs_list": [ 00:30:47.834 { 00:30:47.834 "name": null, 00:30:47.834 "uuid": "7dae8e88-a586-4220-8563-210c7ec28fab", 00:30:47.834 "is_configured": false, 00:30:47.834 "data_offset": 0, 00:30:47.834 "data_size": 63488 00:30:47.834 }, 00:30:47.834 { 00:30:47.834 "name": "BaseBdev2", 00:30:47.834 "uuid": "2e7f1287-f140-4116-b347-50a91a8662f0", 00:30:47.834 "is_configured": true, 00:30:47.834 "data_offset": 2048, 00:30:47.834 "data_size": 63488 00:30:47.834 }, 00:30:47.834 { 00:30:47.834 "name": "BaseBdev3", 00:30:47.834 "uuid": "e02b7003-3183-4454-a077-1207fc4cb72e", 00:30:47.834 "is_configured": true, 00:30:47.834 "data_offset": 2048, 00:30:47.834 "data_size": 63488 00:30:47.834 }, 00:30:47.834 { 00:30:47.834 "name": "BaseBdev4", 00:30:47.834 "uuid": "56cb5e46-e659-4714-8011-a8be2accc8b1", 00:30:47.834 "is_configured": true, 00:30:47.834 "data_offset": 2048, 00:30:47.834 "data_size": 63488 00:30:47.834 } 00:30:47.834 ] 00:30:47.834 }' 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:30:47.834 13:50:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7dae8e88-a586-4220-8563-210c7ec28fab 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:48.402 [2024-11-20 13:50:51.209429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:48.402 NewBaseBdev 00:30:48.402 [2024-11-20 
13:50:51.210010] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:48.402 [2024-11-20 13:50:51.210035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:48.402 [2024-11-20 13:50:51.210402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:48.402 [2024-11-20 13:50:51.217487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:48.402 [2024-11-20 13:50:51.217534] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:30:48.402 [2024-11-20 13:50:51.217851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.402 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:48.402 [ 00:30:48.402 { 00:30:48.402 "name": "NewBaseBdev", 00:30:48.402 "aliases": [ 00:30:48.402 "7dae8e88-a586-4220-8563-210c7ec28fab" 00:30:48.402 ], 00:30:48.402 "product_name": "Malloc disk", 00:30:48.402 "block_size": 512, 00:30:48.402 "num_blocks": 65536, 00:30:48.402 "uuid": "7dae8e88-a586-4220-8563-210c7ec28fab", 00:30:48.402 "assigned_rate_limits": { 00:30:48.402 "rw_ios_per_sec": 0, 00:30:48.402 "rw_mbytes_per_sec": 0, 00:30:48.402 "r_mbytes_per_sec": 0, 00:30:48.402 "w_mbytes_per_sec": 0 00:30:48.402 }, 00:30:48.402 "claimed": true, 00:30:48.402 "claim_type": "exclusive_write", 00:30:48.402 "zoned": false, 00:30:48.403 "supported_io_types": { 00:30:48.403 "read": true, 00:30:48.403 "write": true, 00:30:48.403 "unmap": true, 00:30:48.403 "flush": true, 00:30:48.403 "reset": true, 00:30:48.403 "nvme_admin": false, 00:30:48.403 "nvme_io": false, 00:30:48.403 "nvme_io_md": false, 00:30:48.403 "write_zeroes": true, 00:30:48.403 "zcopy": true, 00:30:48.403 "get_zone_info": false, 00:30:48.403 "zone_management": false, 00:30:48.403 "zone_append": false, 00:30:48.403 "compare": false, 00:30:48.403 "compare_and_write": false, 00:30:48.403 "abort": true, 00:30:48.403 "seek_hole": false, 00:30:48.403 "seek_data": false, 00:30:48.403 "copy": true, 00:30:48.403 "nvme_iov_md": false 00:30:48.403 }, 00:30:48.403 "memory_domains": [ 00:30:48.403 { 00:30:48.403 "dma_device_id": "system", 00:30:48.403 "dma_device_type": 1 00:30:48.403 }, 00:30:48.403 { 00:30:48.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:48.403 "dma_device_type": 2 00:30:48.403 } 
00:30:48.403 ], 00:30:48.403 "driver_specific": {} 00:30:48.403 } 00:30:48.403 ] 00:30:48.403 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.403 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:48.403 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:30:48.403 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:48.403 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:48.403 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:48.403 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:48.403 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:48.403 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:48.403 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:48.403 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:48.403 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:48.403 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:48.403 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:48.403 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.403 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:48.403 
13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.403 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:48.403 "name": "Existed_Raid", 00:30:48.403 "uuid": "4d96ea8d-77dd-4b20-8c49-ebee6baa4889", 00:30:48.403 "strip_size_kb": 64, 00:30:48.403 "state": "online", 00:30:48.403 "raid_level": "raid5f", 00:30:48.403 "superblock": true, 00:30:48.403 "num_base_bdevs": 4, 00:30:48.403 "num_base_bdevs_discovered": 4, 00:30:48.403 "num_base_bdevs_operational": 4, 00:30:48.403 "base_bdevs_list": [ 00:30:48.403 { 00:30:48.403 "name": "NewBaseBdev", 00:30:48.403 "uuid": "7dae8e88-a586-4220-8563-210c7ec28fab", 00:30:48.403 "is_configured": true, 00:30:48.403 "data_offset": 2048, 00:30:48.403 "data_size": 63488 00:30:48.403 }, 00:30:48.403 { 00:30:48.403 "name": "BaseBdev2", 00:30:48.403 "uuid": "2e7f1287-f140-4116-b347-50a91a8662f0", 00:30:48.403 "is_configured": true, 00:30:48.403 "data_offset": 2048, 00:30:48.403 "data_size": 63488 00:30:48.403 }, 00:30:48.403 { 00:30:48.403 "name": "BaseBdev3", 00:30:48.403 "uuid": "e02b7003-3183-4454-a077-1207fc4cb72e", 00:30:48.403 "is_configured": true, 00:30:48.403 "data_offset": 2048, 00:30:48.403 "data_size": 63488 00:30:48.403 }, 00:30:48.403 { 00:30:48.403 "name": "BaseBdev4", 00:30:48.403 "uuid": "56cb5e46-e659-4714-8011-a8be2accc8b1", 00:30:48.403 "is_configured": true, 00:30:48.403 "data_offset": 2048, 00:30:48.403 "data_size": 63488 00:30:48.403 } 00:30:48.403 ] 00:30:48.403 }' 00:30:48.403 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:48.403 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:48.996 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:30:48.996 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:30:48.996 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:48.996 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:48.996 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:30:48.997 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:48.997 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:48.997 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.997 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:48.997 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:48.997 [2024-11-20 13:50:51.798592] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:48.997 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.997 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:48.997 "name": "Existed_Raid", 00:30:48.997 "aliases": [ 00:30:48.997 "4d96ea8d-77dd-4b20-8c49-ebee6baa4889" 00:30:48.997 ], 00:30:48.997 "product_name": "Raid Volume", 00:30:48.997 "block_size": 512, 00:30:48.997 "num_blocks": 190464, 00:30:48.997 "uuid": "4d96ea8d-77dd-4b20-8c49-ebee6baa4889", 00:30:48.997 "assigned_rate_limits": { 00:30:48.997 "rw_ios_per_sec": 0, 00:30:48.997 "rw_mbytes_per_sec": 0, 00:30:48.997 "r_mbytes_per_sec": 0, 00:30:48.997 "w_mbytes_per_sec": 0 00:30:48.997 }, 00:30:48.997 "claimed": false, 00:30:48.997 "zoned": false, 00:30:48.997 "supported_io_types": { 00:30:48.997 "read": true, 00:30:48.997 "write": true, 00:30:48.997 "unmap": false, 00:30:48.997 "flush": false, 
00:30:48.997 "reset": true, 00:30:48.997 "nvme_admin": false, 00:30:48.997 "nvme_io": false, 00:30:48.997 "nvme_io_md": false, 00:30:48.997 "write_zeroes": true, 00:30:48.997 "zcopy": false, 00:30:48.997 "get_zone_info": false, 00:30:48.997 "zone_management": false, 00:30:48.997 "zone_append": false, 00:30:48.997 "compare": false, 00:30:48.997 "compare_and_write": false, 00:30:48.997 "abort": false, 00:30:48.997 "seek_hole": false, 00:30:48.997 "seek_data": false, 00:30:48.997 "copy": false, 00:30:48.997 "nvme_iov_md": false 00:30:48.997 }, 00:30:48.997 "driver_specific": { 00:30:48.997 "raid": { 00:30:48.997 "uuid": "4d96ea8d-77dd-4b20-8c49-ebee6baa4889", 00:30:48.997 "strip_size_kb": 64, 00:30:48.997 "state": "online", 00:30:48.997 "raid_level": "raid5f", 00:30:48.997 "superblock": true, 00:30:48.997 "num_base_bdevs": 4, 00:30:48.997 "num_base_bdevs_discovered": 4, 00:30:48.997 "num_base_bdevs_operational": 4, 00:30:48.997 "base_bdevs_list": [ 00:30:48.997 { 00:30:48.997 "name": "NewBaseBdev", 00:30:48.997 "uuid": "7dae8e88-a586-4220-8563-210c7ec28fab", 00:30:48.997 "is_configured": true, 00:30:48.997 "data_offset": 2048, 00:30:48.997 "data_size": 63488 00:30:48.997 }, 00:30:48.997 { 00:30:48.997 "name": "BaseBdev2", 00:30:48.997 "uuid": "2e7f1287-f140-4116-b347-50a91a8662f0", 00:30:48.997 "is_configured": true, 00:30:48.997 "data_offset": 2048, 00:30:48.997 "data_size": 63488 00:30:48.997 }, 00:30:48.997 { 00:30:48.997 "name": "BaseBdev3", 00:30:48.997 "uuid": "e02b7003-3183-4454-a077-1207fc4cb72e", 00:30:48.997 "is_configured": true, 00:30:48.997 "data_offset": 2048, 00:30:48.997 "data_size": 63488 00:30:48.997 }, 00:30:48.997 { 00:30:48.997 "name": "BaseBdev4", 00:30:48.997 "uuid": "56cb5e46-e659-4714-8011-a8be2accc8b1", 00:30:48.997 "is_configured": true, 00:30:48.997 "data_offset": 2048, 00:30:48.997 "data_size": 63488 00:30:48.997 } 00:30:48.997 ] 00:30:48.997 } 00:30:48.997 } 00:30:48.997 }' 00:30:48.997 13:50:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:48.997 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:30:48.997 BaseBdev2 00:30:48.997 BaseBdev3 00:30:48.997 BaseBdev4' 00:30:48.997 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:49.256 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:49.256 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:49.256 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:30:49.256 13:50:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:49.256 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.256 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.256 13:50:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:49.256 13:50:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.256 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.516 13:50:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:49.516 13:50:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:49.516 13:50:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:49.516 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.516 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.516 [2024-11-20 13:50:52.190437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:49.516 [2024-11-20 13:50:52.190654] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:49.516 [2024-11-20 13:50:52.190943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:49.516 [2024-11-20 13:50:52.191506] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:49.516 [2024-11-20 13:50:52.191533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:30:49.516 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.516 13:50:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84085 00:30:49.516 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84085 ']' 00:30:49.516 13:50:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 84085 00:30:49.516 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:30:49.516 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:49.516 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84085 00:30:49.516 killing process with pid 84085 00:30:49.516 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:49.516 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:49.516 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84085' 00:30:49.516 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84085 00:30:49.516 [2024-11-20 13:50:52.233057] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:49.516 13:50:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84085 00:30:49.774 [2024-11-20 13:50:52.587304] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:50.710 13:50:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:30:50.710 00:30:50.710 real 0m13.011s 00:30:50.710 user 0m21.512s 00:30:50.710 sys 0m1.995s 00:30:50.710 13:50:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:50.710 13:50:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:50.710 ************************************ 00:30:50.710 END TEST raid5f_state_function_test_sb 00:30:50.710 ************************************ 00:30:50.969 13:50:53 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:30:50.969 13:50:53 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:50.969 13:50:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:50.969 13:50:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:50.969 ************************************ 00:30:50.969 START TEST raid5f_superblock_test 00:30:50.969 ************************************ 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84766 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84766 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84766 ']' 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:50.969 13:50:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.969 [2024-11-20 13:50:53.790713] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:30:50.969 [2024-11-20 13:50:53.791217] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84766 ] 00:30:51.227 [2024-11-20 13:50:53.977346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.227 [2024-11-20 13:50:54.103750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.486 [2024-11-20 13:50:54.301492] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:51.486 [2024-11-20 13:50:54.301844] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:52.053 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:52.053 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:30:52.053 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:30:52.053 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:52.053 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:30:52.053 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:30:52.053 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:30:52.053 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:52.053 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:52.053 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:52.053 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:30:52.053 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.053 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.053 malloc1 00:30:52.053 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.053 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:52.053 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.053 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.053 [2024-11-20 13:50:54.792573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:52.053 [2024-11-20 13:50:54.792790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:52.053 [2024-11-20 13:50:54.792835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:30:52.053 [2024-11-20 13:50:54.792852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:52.053 [2024-11-20 13:50:54.795703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:52.053 [2024-11-20 13:50:54.795751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:52.053 pt1 00:30:52.053 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.054 malloc2 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.054 [2024-11-20 13:50:54.850117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:52.054 [2024-11-20 13:50:54.850185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:52.054 [2024-11-20 13:50:54.850223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:30:52.054 [2024-11-20 13:50:54.850238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:52.054 [2024-11-20 13:50:54.853203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:52.054 [2024-11-20 13:50:54.853246] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:52.054 pt2 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.054 malloc3 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.054 [2024-11-20 13:50:54.914732] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:30:52.054 [2024-11-20 13:50:54.914821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:52.054 [2024-11-20 13:50:54.914855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:52.054 [2024-11-20 13:50:54.914870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:52.054 [2024-11-20 13:50:54.917802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:52.054 [2024-11-20 13:50:54.917848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:30:52.054 pt3 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.054 13:50:54 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.054 malloc4 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.054 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.313 [2024-11-20 13:50:54.970353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:30:52.313 [2024-11-20 13:50:54.970440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:52.313 [2024-11-20 13:50:54.970472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:30:52.313 [2024-11-20 13:50:54.970486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:52.313 [2024-11-20 13:50:54.973228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:52.313 [2024-11-20 13:50:54.973287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:30:52.313 pt4 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:30:52.314 [2024-11-20 13:50:54.982384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:52.314 [2024-11-20 13:50:54.984779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:52.314 [2024-11-20 13:50:54.985097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:52.314 [2024-11-20 13:50:54.985178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:30:52.314 [2024-11-20 13:50:54.985437] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:30:52.314 [2024-11-20 13:50:54.985459] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:52.314 [2024-11-20 13:50:54.985768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:52.314 [2024-11-20 13:50:54.992354] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:30:52.314 [2024-11-20 13:50:54.992548] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:30:52.314 [2024-11-20 13:50:54.992812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:52.314 
13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.314 13:50:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.314 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.314 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:52.314 "name": "raid_bdev1", 00:30:52.314 "uuid": "a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e", 00:30:52.314 "strip_size_kb": 64, 00:30:52.314 "state": "online", 00:30:52.314 "raid_level": "raid5f", 00:30:52.314 "superblock": true, 00:30:52.314 "num_base_bdevs": 4, 00:30:52.314 "num_base_bdevs_discovered": 4, 00:30:52.314 "num_base_bdevs_operational": 4, 00:30:52.314 "base_bdevs_list": [ 00:30:52.314 { 00:30:52.314 "name": "pt1", 00:30:52.314 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:52.314 "is_configured": true, 00:30:52.314 "data_offset": 2048, 00:30:52.314 "data_size": 63488 00:30:52.314 }, 00:30:52.314 { 00:30:52.314 "name": "pt2", 00:30:52.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:52.314 "is_configured": true, 00:30:52.314 "data_offset": 2048, 00:30:52.314 
"data_size": 63488 00:30:52.314 }, 00:30:52.314 { 00:30:52.314 "name": "pt3", 00:30:52.314 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:52.314 "is_configured": true, 00:30:52.314 "data_offset": 2048, 00:30:52.314 "data_size": 63488 00:30:52.314 }, 00:30:52.314 { 00:30:52.314 "name": "pt4", 00:30:52.314 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:52.314 "is_configured": true, 00:30:52.314 "data_offset": 2048, 00:30:52.314 "data_size": 63488 00:30:52.314 } 00:30:52.314 ] 00:30:52.314 }' 00:30:52.314 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:52.314 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.881 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:30:52.881 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:30:52.881 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:52.881 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:52.881 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:52.881 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:52.881 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:52.881 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.881 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:52.881 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.881 [2024-11-20 13:50:55.560551] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:52.881 13:50:55 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.881 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:52.881 "name": "raid_bdev1", 00:30:52.881 "aliases": [ 00:30:52.881 "a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e" 00:30:52.881 ], 00:30:52.881 "product_name": "Raid Volume", 00:30:52.881 "block_size": 512, 00:30:52.881 "num_blocks": 190464, 00:30:52.881 "uuid": "a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e", 00:30:52.881 "assigned_rate_limits": { 00:30:52.881 "rw_ios_per_sec": 0, 00:30:52.881 "rw_mbytes_per_sec": 0, 00:30:52.881 "r_mbytes_per_sec": 0, 00:30:52.881 "w_mbytes_per_sec": 0 00:30:52.881 }, 00:30:52.881 "claimed": false, 00:30:52.881 "zoned": false, 00:30:52.881 "supported_io_types": { 00:30:52.881 "read": true, 00:30:52.881 "write": true, 00:30:52.881 "unmap": false, 00:30:52.881 "flush": false, 00:30:52.881 "reset": true, 00:30:52.881 "nvme_admin": false, 00:30:52.881 "nvme_io": false, 00:30:52.881 "nvme_io_md": false, 00:30:52.881 "write_zeroes": true, 00:30:52.881 "zcopy": false, 00:30:52.881 "get_zone_info": false, 00:30:52.881 "zone_management": false, 00:30:52.881 "zone_append": false, 00:30:52.881 "compare": false, 00:30:52.881 "compare_and_write": false, 00:30:52.881 "abort": false, 00:30:52.881 "seek_hole": false, 00:30:52.881 "seek_data": false, 00:30:52.881 "copy": false, 00:30:52.881 "nvme_iov_md": false 00:30:52.881 }, 00:30:52.881 "driver_specific": { 00:30:52.881 "raid": { 00:30:52.881 "uuid": "a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e", 00:30:52.881 "strip_size_kb": 64, 00:30:52.881 "state": "online", 00:30:52.881 "raid_level": "raid5f", 00:30:52.881 "superblock": true, 00:30:52.881 "num_base_bdevs": 4, 00:30:52.881 "num_base_bdevs_discovered": 4, 00:30:52.882 "num_base_bdevs_operational": 4, 00:30:52.882 "base_bdevs_list": [ 00:30:52.882 { 00:30:52.882 "name": "pt1", 00:30:52.882 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:52.882 "is_configured": true, 00:30:52.882 "data_offset": 2048, 
00:30:52.882 "data_size": 63488 00:30:52.882 }, 00:30:52.882 { 00:30:52.882 "name": "pt2", 00:30:52.882 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:52.882 "is_configured": true, 00:30:52.882 "data_offset": 2048, 00:30:52.882 "data_size": 63488 00:30:52.882 }, 00:30:52.882 { 00:30:52.882 "name": "pt3", 00:30:52.882 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:52.882 "is_configured": true, 00:30:52.882 "data_offset": 2048, 00:30:52.882 "data_size": 63488 00:30:52.882 }, 00:30:52.882 { 00:30:52.882 "name": "pt4", 00:30:52.882 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:52.882 "is_configured": true, 00:30:52.882 "data_offset": 2048, 00:30:52.882 "data_size": 63488 00:30:52.882 } 00:30:52.882 ] 00:30:52.882 } 00:30:52.882 } 00:30:52.882 }' 00:30:52.882 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:52.882 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:30:52.882 pt2 00:30:52.882 pt3 00:30:52.882 pt4' 00:30:52.882 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:52.882 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:52.882 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:52.882 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:30:52.882 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:52.882 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.882 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.882 13:50:55 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.882 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:52.882 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:52.882 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:52.882 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:30:52.882 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.882 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.882 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:52.882 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.141 [2024-11-20 13:50:55.940685] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e ']' 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.141 [2024-11-20 13:50:55.984459] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:53.141 [2024-11-20 13:50:55.984625] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:53.141 [2024-11-20 13:50:55.984756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:53.141 [2024-11-20 13:50:55.984868] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:53.141 [2024-11-20 13:50:55.984920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.141 13:50:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:30:53.141 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.141 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:30:53.141 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:30:53.141 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:53.141 
13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:30:53.141 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.141 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.400 13:50:56 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:30:53.400 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.400 [2024-11-20 13:50:56.160529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:30:53.400 [2024-11-20 13:50:56.163286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:30:53.400 [2024-11-20 13:50:56.163377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:30:53.400 [2024-11-20 13:50:56.163426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:30:53.400 [2024-11-20 13:50:56.163494] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:30:53.400 [2024-11-20 13:50:56.163580] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:30:53.400 [2024-11-20 13:50:56.163611] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:30:53.400 [2024-11-20 13:50:56.163640] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:30:53.400 [2024-11-20 13:50:56.163691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:53.400 [2024-11-20 13:50:56.163709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:30:53.400 request: 00:30:53.400 { 00:30:53.400 "name": "raid_bdev1", 00:30:53.400 "raid_level": "raid5f", 00:30:53.400 "base_bdevs": [ 00:30:53.400 "malloc1", 00:30:53.400 "malloc2", 00:30:53.400 "malloc3", 00:30:53.400 "malloc4" 00:30:53.400 ], 00:30:53.400 "strip_size_kb": 64, 00:30:53.400 "superblock": false, 00:30:53.400 "method": "bdev_raid_create", 00:30:53.400 "req_id": 1 00:30:53.400 } 00:30:53.400 Got JSON-RPC error response 
00:30:53.401 response: 00:30:53.401 { 00:30:53.401 "code": -17, 00:30:53.401 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:30:53.401 } 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.401 [2024-11-20 13:50:56.224602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:53.401 [2024-11-20 13:50:56.224842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:30:53.401 [2024-11-20 13:50:56.224943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:30:53.401 [2024-11-20 13:50:56.225194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:53.401 [2024-11-20 13:50:56.228302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:53.401 [2024-11-20 13:50:56.228515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:53.401 [2024-11-20 13:50:56.228621] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:30:53.401 [2024-11-20 13:50:56.228694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:53.401 pt1 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:53.401 "name": "raid_bdev1", 00:30:53.401 "uuid": "a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e", 00:30:53.401 "strip_size_kb": 64, 00:30:53.401 "state": "configuring", 00:30:53.401 "raid_level": "raid5f", 00:30:53.401 "superblock": true, 00:30:53.401 "num_base_bdevs": 4, 00:30:53.401 "num_base_bdevs_discovered": 1, 00:30:53.401 "num_base_bdevs_operational": 4, 00:30:53.401 "base_bdevs_list": [ 00:30:53.401 { 00:30:53.401 "name": "pt1", 00:30:53.401 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:53.401 "is_configured": true, 00:30:53.401 "data_offset": 2048, 00:30:53.401 "data_size": 63488 00:30:53.401 }, 00:30:53.401 { 00:30:53.401 "name": null, 00:30:53.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:53.401 "is_configured": false, 00:30:53.401 "data_offset": 2048, 00:30:53.401 "data_size": 63488 00:30:53.401 }, 00:30:53.401 { 00:30:53.401 "name": null, 00:30:53.401 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:53.401 "is_configured": false, 00:30:53.401 "data_offset": 2048, 00:30:53.401 "data_size": 63488 00:30:53.401 }, 00:30:53.401 { 00:30:53.401 "name": null, 00:30:53.401 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:53.401 "is_configured": false, 00:30:53.401 "data_offset": 2048, 00:30:53.401 "data_size": 63488 00:30:53.401 } 00:30:53.401 ] 00:30:53.401 }' 
00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:53.401 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.969 [2024-11-20 13:50:56.712885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:53.969 [2024-11-20 13:50:56.713207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:53.969 [2024-11-20 13:50:56.713249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:30:53.969 [2024-11-20 13:50:56.713269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:53.969 [2024-11-20 13:50:56.713925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:53.969 [2024-11-20 13:50:56.714012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:53.969 [2024-11-20 13:50:56.714120] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:53.969 [2024-11-20 13:50:56.714165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:53.969 pt2 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.969 [2024-11-20 13:50:56.720870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:53.969 "name": "raid_bdev1", 00:30:53.969 "uuid": "a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e", 00:30:53.969 "strip_size_kb": 64, 00:30:53.969 "state": "configuring", 00:30:53.969 "raid_level": "raid5f", 00:30:53.969 "superblock": true, 00:30:53.969 "num_base_bdevs": 4, 00:30:53.969 "num_base_bdevs_discovered": 1, 00:30:53.969 "num_base_bdevs_operational": 4, 00:30:53.969 "base_bdevs_list": [ 00:30:53.969 { 00:30:53.969 "name": "pt1", 00:30:53.969 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:53.969 "is_configured": true, 00:30:53.969 "data_offset": 2048, 00:30:53.969 "data_size": 63488 00:30:53.969 }, 00:30:53.969 { 00:30:53.969 "name": null, 00:30:53.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:53.969 "is_configured": false, 00:30:53.969 "data_offset": 0, 00:30:53.969 "data_size": 63488 00:30:53.969 }, 00:30:53.969 { 00:30:53.969 "name": null, 00:30:53.969 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:53.969 "is_configured": false, 00:30:53.969 "data_offset": 2048, 00:30:53.969 "data_size": 63488 00:30:53.969 }, 00:30:53.969 { 00:30:53.969 "name": null, 00:30:53.969 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:53.969 "is_configured": false, 00:30:53.969 "data_offset": 2048, 00:30:53.969 "data_size": 63488 00:30:53.969 } 00:30:53.969 ] 00:30:53.969 }' 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:53.969 13:50:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.537 [2024-11-20 13:50:57.257080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:54.537 [2024-11-20 13:50:57.257307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:54.537 [2024-11-20 13:50:57.257352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:30:54.537 [2024-11-20 13:50:57.257369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:54.537 [2024-11-20 13:50:57.258038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:54.537 [2024-11-20 13:50:57.258065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:54.537 [2024-11-20 13:50:57.258186] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:54.537 [2024-11-20 13:50:57.258218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:54.537 pt2 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.537 [2024-11-20 13:50:57.269045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:30:54.537 [2024-11-20 13:50:57.269103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:54.537 [2024-11-20 13:50:57.269138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:30:54.537 [2024-11-20 13:50:57.269154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:54.537 [2024-11-20 13:50:57.269590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:54.537 [2024-11-20 13:50:57.269620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:30:54.537 [2024-11-20 13:50:57.269729] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:30:54.537 [2024-11-20 13:50:57.269791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:54.537 pt3 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.537 [2024-11-20 13:50:57.281025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:30:54.537 [2024-11-20 13:50:57.281104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:54.537 [2024-11-20 13:50:57.281133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:30:54.537 [2024-11-20 13:50:57.281147] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:54.537 [2024-11-20 13:50:57.281655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:54.537 [2024-11-20 13:50:57.281686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:30:54.537 [2024-11-20 13:50:57.281776] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:30:54.537 [2024-11-20 13:50:57.281825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:30:54.537 [2024-11-20 13:50:57.282055] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:54.537 [2024-11-20 13:50:57.282090] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:54.537 [2024-11-20 13:50:57.282419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:30:54.537 [2024-11-20 13:50:57.288687] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:54.537 [2024-11-20 13:50:57.288888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:30:54.537 [2024-11-20 13:50:57.289157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:54.537 pt4 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:54.537 "name": "raid_bdev1", 00:30:54.537 "uuid": "a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e", 00:30:54.537 "strip_size_kb": 64, 00:30:54.537 "state": "online", 00:30:54.537 "raid_level": "raid5f", 00:30:54.537 "superblock": true, 00:30:54.537 "num_base_bdevs": 4, 00:30:54.537 "num_base_bdevs_discovered": 4, 00:30:54.537 "num_base_bdevs_operational": 4, 00:30:54.537 "base_bdevs_list": [ 00:30:54.537 { 00:30:54.537 "name": "pt1", 00:30:54.537 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:54.537 "is_configured": true, 00:30:54.537 
"data_offset": 2048, 00:30:54.537 "data_size": 63488 00:30:54.537 }, 00:30:54.537 { 00:30:54.537 "name": "pt2", 00:30:54.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:54.537 "is_configured": true, 00:30:54.537 "data_offset": 2048, 00:30:54.537 "data_size": 63488 00:30:54.537 }, 00:30:54.537 { 00:30:54.537 "name": "pt3", 00:30:54.537 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:54.537 "is_configured": true, 00:30:54.537 "data_offset": 2048, 00:30:54.537 "data_size": 63488 00:30:54.537 }, 00:30:54.537 { 00:30:54.537 "name": "pt4", 00:30:54.537 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:54.537 "is_configured": true, 00:30:54.537 "data_offset": 2048, 00:30:54.537 "data_size": 63488 00:30:54.537 } 00:30:54.537 ] 00:30:54.537 }' 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:54.537 13:50:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.107 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:30:55.107 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:30:55.107 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:55.107 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:55.107 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:55.107 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:55.107 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:55.107 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:55.107 13:50:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.107 13:50:57 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.107 [2024-11-20 13:50:57.864954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:55.107 13:50:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.107 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:55.107 "name": "raid_bdev1", 00:30:55.107 "aliases": [ 00:30:55.107 "a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e" 00:30:55.107 ], 00:30:55.107 "product_name": "Raid Volume", 00:30:55.107 "block_size": 512, 00:30:55.107 "num_blocks": 190464, 00:30:55.107 "uuid": "a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e", 00:30:55.107 "assigned_rate_limits": { 00:30:55.107 "rw_ios_per_sec": 0, 00:30:55.107 "rw_mbytes_per_sec": 0, 00:30:55.107 "r_mbytes_per_sec": 0, 00:30:55.107 "w_mbytes_per_sec": 0 00:30:55.107 }, 00:30:55.107 "claimed": false, 00:30:55.107 "zoned": false, 00:30:55.107 "supported_io_types": { 00:30:55.107 "read": true, 00:30:55.107 "write": true, 00:30:55.107 "unmap": false, 00:30:55.107 "flush": false, 00:30:55.107 "reset": true, 00:30:55.107 "nvme_admin": false, 00:30:55.107 "nvme_io": false, 00:30:55.107 "nvme_io_md": false, 00:30:55.107 "write_zeroes": true, 00:30:55.107 "zcopy": false, 00:30:55.107 "get_zone_info": false, 00:30:55.107 "zone_management": false, 00:30:55.107 "zone_append": false, 00:30:55.107 "compare": false, 00:30:55.107 "compare_and_write": false, 00:30:55.107 "abort": false, 00:30:55.107 "seek_hole": false, 00:30:55.107 "seek_data": false, 00:30:55.107 "copy": false, 00:30:55.107 "nvme_iov_md": false 00:30:55.107 }, 00:30:55.107 "driver_specific": { 00:30:55.107 "raid": { 00:30:55.107 "uuid": "a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e", 00:30:55.107 "strip_size_kb": 64, 00:30:55.107 "state": "online", 00:30:55.107 "raid_level": "raid5f", 00:30:55.107 "superblock": true, 00:30:55.107 "num_base_bdevs": 4, 00:30:55.107 "num_base_bdevs_discovered": 4, 
00:30:55.107 "num_base_bdevs_operational": 4, 00:30:55.107 "base_bdevs_list": [ 00:30:55.107 { 00:30:55.107 "name": "pt1", 00:30:55.107 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:55.107 "is_configured": true, 00:30:55.107 "data_offset": 2048, 00:30:55.107 "data_size": 63488 00:30:55.107 }, 00:30:55.107 { 00:30:55.107 "name": "pt2", 00:30:55.107 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:55.107 "is_configured": true, 00:30:55.107 "data_offset": 2048, 00:30:55.107 "data_size": 63488 00:30:55.107 }, 00:30:55.107 { 00:30:55.107 "name": "pt3", 00:30:55.107 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:55.107 "is_configured": true, 00:30:55.107 "data_offset": 2048, 00:30:55.107 "data_size": 63488 00:30:55.107 }, 00:30:55.107 { 00:30:55.107 "name": "pt4", 00:30:55.107 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:55.107 "is_configured": true, 00:30:55.107 "data_offset": 2048, 00:30:55.107 "data_size": 63488 00:30:55.107 } 00:30:55.107 ] 00:30:55.107 } 00:30:55.107 } 00:30:55.107 }' 00:30:55.107 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:55.107 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:30:55.107 pt2 00:30:55.107 pt3 00:30:55.107 pt4' 00:30:55.107 13:50:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:55.107 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:55.107 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:55.107 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:55.107 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:30:55.107 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.107 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.376 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.377 [2024-11-20 13:50:58.216962] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.377 13:50:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e '!=' a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e ']' 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.377 [2024-11-20 13:50:58.264815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.377 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:55.636 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.636 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:55.636 "name": "raid_bdev1", 00:30:55.636 "uuid": "a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e", 00:30:55.636 "strip_size_kb": 64, 00:30:55.636 "state": "online", 00:30:55.636 "raid_level": "raid5f", 00:30:55.636 "superblock": true, 00:30:55.636 "num_base_bdevs": 4, 00:30:55.636 "num_base_bdevs_discovered": 3, 00:30:55.636 "num_base_bdevs_operational": 3, 00:30:55.636 "base_bdevs_list": [ 00:30:55.636 { 00:30:55.636 "name": null, 00:30:55.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:55.636 "is_configured": false, 00:30:55.636 "data_offset": 0, 00:30:55.636 "data_size": 63488 00:30:55.636 }, 00:30:55.636 { 00:30:55.636 "name": "pt2", 00:30:55.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:55.636 "is_configured": true, 00:30:55.636 "data_offset": 2048, 00:30:55.636 "data_size": 63488 00:30:55.636 }, 00:30:55.636 { 00:30:55.636 "name": "pt3", 00:30:55.636 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:55.636 "is_configured": true, 00:30:55.636 "data_offset": 2048, 00:30:55.636 "data_size": 63488 00:30:55.636 }, 00:30:55.636 { 00:30:55.636 "name": "pt4", 00:30:55.636 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:55.636 "is_configured": true, 00:30:55.636 
"data_offset": 2048, 00:30:55.636 "data_size": 63488 00:30:55.636 } 00:30:55.636 ] 00:30:55.636 }' 00:30:55.636 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:55.636 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.894 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:55.894 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.894 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.894 [2024-11-20 13:50:58.780998] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:55.894 [2024-11-20 13:50:58.781048] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:55.894 [2024-11-20 13:50:58.781161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:55.894 [2024-11-20 13:50:58.781270] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:55.894 [2024-11-20 13:50:58.781287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:30:55.894 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.894 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:55.894 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:30:55.894 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.894 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.894 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.154 [2024-11-20 13:50:58.864924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:56.154 [2024-11-20 13:50:58.864989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:56.154 [2024-11-20 13:50:58.865020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:30:56.154 [2024-11-20 13:50:58.865035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:56.154 [2024-11-20 13:50:58.868137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:56.154 [2024-11-20 13:50:58.868189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:56.154 [2024-11-20 13:50:58.868306] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:56.154 [2024-11-20 13:50:58.868368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:56.154 pt2 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:56.154 "name": "raid_bdev1", 00:30:56.154 "uuid": "a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e", 00:30:56.154 "strip_size_kb": 64, 00:30:56.154 "state": "configuring", 00:30:56.154 "raid_level": "raid5f", 00:30:56.154 "superblock": true, 00:30:56.154 
"num_base_bdevs": 4, 00:30:56.154 "num_base_bdevs_discovered": 1, 00:30:56.154 "num_base_bdevs_operational": 3, 00:30:56.154 "base_bdevs_list": [ 00:30:56.154 { 00:30:56.154 "name": null, 00:30:56.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:56.154 "is_configured": false, 00:30:56.154 "data_offset": 2048, 00:30:56.154 "data_size": 63488 00:30:56.154 }, 00:30:56.154 { 00:30:56.154 "name": "pt2", 00:30:56.154 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:56.154 "is_configured": true, 00:30:56.154 "data_offset": 2048, 00:30:56.154 "data_size": 63488 00:30:56.154 }, 00:30:56.154 { 00:30:56.154 "name": null, 00:30:56.154 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:56.154 "is_configured": false, 00:30:56.154 "data_offset": 2048, 00:30:56.154 "data_size": 63488 00:30:56.154 }, 00:30:56.154 { 00:30:56.154 "name": null, 00:30:56.154 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:56.154 "is_configured": false, 00:30:56.154 "data_offset": 2048, 00:30:56.154 "data_size": 63488 00:30:56.154 } 00:30:56.154 ] 00:30:56.154 }' 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:56.154 13:50:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.723 [2024-11-20 13:50:59.373089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:30:56.723 [2024-11-20 
13:50:59.373199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:56.723 [2024-11-20 13:50:59.373260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:30:56.723 [2024-11-20 13:50:59.373275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:56.723 [2024-11-20 13:50:59.373879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:56.723 [2024-11-20 13:50:59.373926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:30:56.723 [2024-11-20 13:50:59.374050] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:30:56.723 [2024-11-20 13:50:59.374084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:56.723 pt3 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.723 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:56.723 "name": "raid_bdev1", 00:30:56.723 "uuid": "a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e", 00:30:56.723 "strip_size_kb": 64, 00:30:56.723 "state": "configuring", 00:30:56.723 "raid_level": "raid5f", 00:30:56.723 "superblock": true, 00:30:56.723 "num_base_bdevs": 4, 00:30:56.723 "num_base_bdevs_discovered": 2, 00:30:56.723 "num_base_bdevs_operational": 3, 00:30:56.723 "base_bdevs_list": [ 00:30:56.723 { 00:30:56.723 "name": null, 00:30:56.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:56.723 "is_configured": false, 00:30:56.723 "data_offset": 2048, 00:30:56.723 "data_size": 63488 00:30:56.723 }, 00:30:56.723 { 00:30:56.723 "name": "pt2", 00:30:56.723 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:56.723 "is_configured": true, 00:30:56.723 "data_offset": 2048, 00:30:56.723 "data_size": 63488 00:30:56.723 }, 00:30:56.723 { 00:30:56.723 "name": "pt3", 00:30:56.723 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:56.723 "is_configured": true, 00:30:56.723 "data_offset": 2048, 00:30:56.723 "data_size": 63488 00:30:56.723 }, 00:30:56.723 { 00:30:56.723 "name": null, 00:30:56.723 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:56.723 "is_configured": false, 00:30:56.723 "data_offset": 2048, 
00:30:56.723 "data_size": 63488 00:30:56.723 } 00:30:56.724 ] 00:30:56.724 }' 00:30:56.724 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:56.724 13:50:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.982 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:30:56.982 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:30:56.982 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:30:56.982 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:30:56.982 13:50:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.982 13:50:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.982 [2024-11-20 13:50:59.889275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:30:56.982 [2024-11-20 13:50:59.889363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:56.982 [2024-11-20 13:50:59.889400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:30:56.982 [2024-11-20 13:50:59.889416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:56.982 [2024-11-20 13:50:59.890064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:56.982 [2024-11-20 13:50:59.890091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:30:56.982 [2024-11-20 13:50:59.890196] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:30:56.982 [2024-11-20 13:50:59.890237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:30:56.982 [2024-11-20 13:50:59.890419] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:56.982 [2024-11-20 13:50:59.890435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:56.982 [2024-11-20 13:50:59.890751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:30:57.241 [2024-11-20 13:50:59.897609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:57.241 [2024-11-20 13:50:59.897643] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:30:57.241 [2024-11-20 13:50:59.898039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:57.241 pt4 00:30:57.241 13:50:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.241 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:57.241 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:57.241 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:57.241 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:57.241 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:57.241 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:57.241 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:57.241 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:57.241 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:57.241 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:57.241 
13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.241 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:57.241 13:50:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.241 13:50:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.241 13:50:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.241 13:50:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:57.241 "name": "raid_bdev1", 00:30:57.241 "uuid": "a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e", 00:30:57.241 "strip_size_kb": 64, 00:30:57.241 "state": "online", 00:30:57.241 "raid_level": "raid5f", 00:30:57.241 "superblock": true, 00:30:57.241 "num_base_bdevs": 4, 00:30:57.241 "num_base_bdevs_discovered": 3, 00:30:57.241 "num_base_bdevs_operational": 3, 00:30:57.241 "base_bdevs_list": [ 00:30:57.241 { 00:30:57.241 "name": null, 00:30:57.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:57.241 "is_configured": false, 00:30:57.241 "data_offset": 2048, 00:30:57.241 "data_size": 63488 00:30:57.241 }, 00:30:57.241 { 00:30:57.241 "name": "pt2", 00:30:57.241 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:57.241 "is_configured": true, 00:30:57.241 "data_offset": 2048, 00:30:57.241 "data_size": 63488 00:30:57.241 }, 00:30:57.241 { 00:30:57.241 "name": "pt3", 00:30:57.241 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:57.241 "is_configured": true, 00:30:57.241 "data_offset": 2048, 00:30:57.241 "data_size": 63488 00:30:57.241 }, 00:30:57.241 { 00:30:57.241 "name": "pt4", 00:30:57.241 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:57.241 "is_configured": true, 00:30:57.241 "data_offset": 2048, 00:30:57.241 "data_size": 63488 00:30:57.241 } 00:30:57.241 ] 00:30:57.241 }' 00:30:57.241 13:50:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:57.241 13:50:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.499 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:57.499 13:51:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.499 13:51:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.499 [2024-11-20 13:51:00.413716] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:57.499 [2024-11-20 13:51:00.413757] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:57.499 [2024-11-20 13:51:00.413858] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:57.499 [2024-11-20 13:51:00.413976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:57.499 [2024-11-20 13:51:00.414000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.759 [2024-11-20 13:51:00.477704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:57.759 [2024-11-20 13:51:00.477789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:57.759 [2024-11-20 13:51:00.477835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:30:57.759 [2024-11-20 13:51:00.477857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:57.759 [2024-11-20 13:51:00.480796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:57.759 [2024-11-20 13:51:00.480854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:57.759 [2024-11-20 13:51:00.480973] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:30:57.759 [2024-11-20 13:51:00.481048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:57.759 
[2024-11-20 13:51:00.481211] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:30:57.759 [2024-11-20 13:51:00.481234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:57.759 [2024-11-20 13:51:00.481261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:30:57.759 [2024-11-20 13:51:00.481335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:57.759 [2024-11-20 13:51:00.481477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:57.759 pt1 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:57.759 "name": "raid_bdev1", 00:30:57.759 "uuid": "a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e", 00:30:57.759 "strip_size_kb": 64, 00:30:57.759 "state": "configuring", 00:30:57.759 "raid_level": "raid5f", 00:30:57.759 "superblock": true, 00:30:57.759 "num_base_bdevs": 4, 00:30:57.759 "num_base_bdevs_discovered": 2, 00:30:57.759 "num_base_bdevs_operational": 3, 00:30:57.759 "base_bdevs_list": [ 00:30:57.759 { 00:30:57.759 "name": null, 00:30:57.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:57.759 "is_configured": false, 00:30:57.759 "data_offset": 2048, 00:30:57.759 "data_size": 63488 00:30:57.759 }, 00:30:57.759 { 00:30:57.759 "name": "pt2", 00:30:57.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:57.759 "is_configured": true, 00:30:57.759 "data_offset": 2048, 00:30:57.759 "data_size": 63488 00:30:57.759 }, 00:30:57.759 { 00:30:57.759 "name": "pt3", 00:30:57.759 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:57.759 "is_configured": true, 00:30:57.759 "data_offset": 2048, 00:30:57.759 "data_size": 63488 00:30:57.759 }, 00:30:57.759 { 00:30:57.759 "name": null, 00:30:57.759 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:57.759 "is_configured": false, 00:30:57.759 "data_offset": 2048, 00:30:57.759 "data_size": 63488 00:30:57.759 } 00:30:57.759 ] 
00:30:57.759 }' 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:57.759 13:51:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.328 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:30:58.328 13:51:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:30:58.328 13:51:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.328 13:51:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.328 [2024-11-20 13:51:01.045887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:30:58.328 [2024-11-20 13:51:01.045980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:58.328 [2024-11-20 13:51:01.046017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:30:58.328 [2024-11-20 13:51:01.046033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:58.328 [2024-11-20 13:51:01.046597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:58.328 [2024-11-20 13:51:01.046624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:30:58.328 [2024-11-20 13:51:01.046728] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:30:58.328 [2024-11-20 13:51:01.046762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:30:58.328 [2024-11-20 13:51:01.046962] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:30:58.328 [2024-11-20 13:51:01.046979] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:58.328 [2024-11-20 13:51:01.047291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:30:58.328 [2024-11-20 13:51:01.053764] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:30:58.328 [2024-11-20 13:51:01.053802] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:30:58.328 [2024-11-20 13:51:01.054159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:58.328 pt4 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:58.328 13:51:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:58.328 "name": "raid_bdev1", 00:30:58.328 "uuid": "a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e", 00:30:58.328 "strip_size_kb": 64, 00:30:58.328 "state": "online", 00:30:58.328 "raid_level": "raid5f", 00:30:58.328 "superblock": true, 00:30:58.328 "num_base_bdevs": 4, 00:30:58.328 "num_base_bdevs_discovered": 3, 00:30:58.328 "num_base_bdevs_operational": 3, 00:30:58.328 "base_bdevs_list": [ 00:30:58.328 { 00:30:58.328 "name": null, 00:30:58.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:58.328 "is_configured": false, 00:30:58.328 "data_offset": 2048, 00:30:58.328 "data_size": 63488 00:30:58.328 }, 00:30:58.328 { 00:30:58.328 "name": "pt2", 00:30:58.328 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:58.328 "is_configured": true, 00:30:58.328 "data_offset": 2048, 00:30:58.328 "data_size": 63488 00:30:58.328 }, 00:30:58.328 { 00:30:58.328 "name": "pt3", 00:30:58.328 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:58.328 "is_configured": true, 00:30:58.328 "data_offset": 2048, 00:30:58.328 "data_size": 63488 
00:30:58.328 }, 00:30:58.328 { 00:30:58.328 "name": "pt4", 00:30:58.328 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:58.328 "is_configured": true, 00:30:58.328 "data_offset": 2048, 00:30:58.328 "data_size": 63488 00:30:58.328 } 00:30:58.328 ] 00:30:58.328 }' 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:58.328 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.895 [2024-11-20 13:51:01.621825] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e '!=' a91a0a8c-ffa6-470e-9e81-d31bf6d16c0e ']' 00:30:58.895 13:51:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84766 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84766 ']' 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84766 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84766 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:58.895 killing process with pid 84766 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84766' 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84766 00:30:58.895 [2024-11-20 13:51:01.704642] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:58.895 13:51:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84766 00:30:58.895 [2024-11-20 13:51:01.704753] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:58.895 [2024-11-20 13:51:01.704854] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:58.896 [2024-11-20 13:51:01.704875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:30:59.154 [2024-11-20 13:51:02.056911] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:00.530 13:51:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:31:00.530 
00:31:00.530 real 0m9.438s 00:31:00.530 user 0m15.459s 00:31:00.530 sys 0m1.398s 00:31:00.530 13:51:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:00.530 ************************************ 00:31:00.530 END TEST raid5f_superblock_test 00:31:00.530 ************************************ 00:31:00.530 13:51:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.530 13:51:03 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:31:00.530 13:51:03 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:31:00.530 13:51:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:31:00.530 13:51:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:00.530 13:51:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:00.530 ************************************ 00:31:00.530 START TEST raid5f_rebuild_test 00:31:00.530 ************************************ 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:31:00.530 13:51:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85257 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85257 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85257 ']' 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:00.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:00.530 13:51:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.530 [2024-11-20 13:51:03.294806] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:31:00.530 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:00.530 Zero copy mechanism will not be used. 
00:31:00.530 [2024-11-20 13:51:03.295028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85257 ] 00:31:00.789 [2024-11-20 13:51:03.488976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.789 [2024-11-20 13:51:03.643803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.048 [2024-11-20 13:51:03.879205] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:01.048 [2024-11-20 13:51:03.879293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:01.615 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:01.615 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.616 BaseBdev1_malloc 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.616 [2024-11-20 13:51:04.308206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:31:01.616 [2024-11-20 13:51:04.308275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:01.616 [2024-11-20 13:51:04.308307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:01.616 [2024-11-20 13:51:04.308327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:01.616 [2024-11-20 13:51:04.311038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:01.616 [2024-11-20 13:51:04.311084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:01.616 BaseBdev1 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.616 BaseBdev2_malloc 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.616 [2024-11-20 13:51:04.355819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:01.616 [2024-11-20 13:51:04.355889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:01.616 [2024-11-20 13:51:04.355942] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:01.616 [2024-11-20 13:51:04.355961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:01.616 [2024-11-20 13:51:04.358661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:01.616 [2024-11-20 13:51:04.358703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:01.616 BaseBdev2 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.616 BaseBdev3_malloc 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.616 [2024-11-20 13:51:04.415647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:31:01.616 [2024-11-20 13:51:04.415737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:01.616 [2024-11-20 13:51:04.415768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:01.616 [2024-11-20 13:51:04.415788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:01.616 
[2024-11-20 13:51:04.418546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:01.616 [2024-11-20 13:51:04.418590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:01.616 BaseBdev3 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.616 BaseBdev4_malloc 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.616 [2024-11-20 13:51:04.467800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:31:01.616 [2024-11-20 13:51:04.467870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:01.616 [2024-11-20 13:51:04.467915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:31:01.616 [2024-11-20 13:51:04.467945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:01.616 [2024-11-20 13:51:04.470649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:01.616 [2024-11-20 13:51:04.470696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:31:01.616 BaseBdev4 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.616 spare_malloc 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.616 spare_delay 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.616 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.616 [2024-11-20 13:51:04.527993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:01.616 [2024-11-20 13:51:04.528054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:01.616 [2024-11-20 13:51:04.528082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:31:01.616 [2024-11-20 13:51:04.528100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:01.875 [2024-11-20 13:51:04.530822] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:01.875 [2024-11-20 13:51:04.530866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:01.875 spare 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.875 [2024-11-20 13:51:04.536037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:01.875 [2024-11-20 13:51:04.538478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:01.875 [2024-11-20 13:51:04.538573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:01.875 [2024-11-20 13:51:04.538653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:01.875 [2024-11-20 13:51:04.538778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:01.875 [2024-11-20 13:51:04.538798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:31:01.875 [2024-11-20 13:51:04.539137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:31:01.875 [2024-11-20 13:51:04.545950] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:01.875 [2024-11-20 13:51:04.545979] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:01.875 [2024-11-20 13:51:04.546230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:01.875 13:51:04 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:01.875 "name": "raid_bdev1", 00:31:01.875 "uuid": "caa754fa-f118-43ee-9f52-34317c666595", 00:31:01.875 "strip_size_kb": 64, 00:31:01.875 "state": "online", 00:31:01.875 
"raid_level": "raid5f", 00:31:01.875 "superblock": false, 00:31:01.875 "num_base_bdevs": 4, 00:31:01.875 "num_base_bdevs_discovered": 4, 00:31:01.875 "num_base_bdevs_operational": 4, 00:31:01.875 "base_bdevs_list": [ 00:31:01.875 { 00:31:01.875 "name": "BaseBdev1", 00:31:01.875 "uuid": "e2d7e803-1763-5bd4-9b1f-8eb9229a5df6", 00:31:01.875 "is_configured": true, 00:31:01.875 "data_offset": 0, 00:31:01.875 "data_size": 65536 00:31:01.875 }, 00:31:01.875 { 00:31:01.875 "name": "BaseBdev2", 00:31:01.875 "uuid": "53c27802-2acc-57d7-a825-f6bf1b6e4c3f", 00:31:01.875 "is_configured": true, 00:31:01.875 "data_offset": 0, 00:31:01.875 "data_size": 65536 00:31:01.875 }, 00:31:01.875 { 00:31:01.875 "name": "BaseBdev3", 00:31:01.875 "uuid": "2694c93b-a7d8-5364-ac9d-8b6e6a692add", 00:31:01.875 "is_configured": true, 00:31:01.875 "data_offset": 0, 00:31:01.875 "data_size": 65536 00:31:01.875 }, 00:31:01.875 { 00:31:01.875 "name": "BaseBdev4", 00:31:01.875 "uuid": "a7dfbc78-d687-5720-89f0-3af332e9981f", 00:31:01.875 "is_configured": true, 00:31:01.875 "data_offset": 0, 00:31:01.875 "data_size": 65536 00:31:01.875 } 00:31:01.875 ] 00:31:01.875 }' 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:01.875 13:51:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.459 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:02.459 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:31:02.459 13:51:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.459 13:51:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.459 [2024-11-20 13:51:05.070062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:02.459 13:51:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:31:02.459 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:31:02.459 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:02.459 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:02.459 13:51:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.459 13:51:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.459 13:51:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.459 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:31:02.459 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:31:02.459 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:31:02.459 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:31:02.459 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:31:02.459 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:02.459 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:02.460 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:02.460 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:02.460 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:02.460 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:02.460 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:02.460 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:31:02.460 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:02.718 [2024-11-20 13:51:05.457953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:31:02.718 /dev/nbd0 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:02.718 1+0 records in 00:31:02.718 1+0 records out 00:31:02.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315638 s, 13.0 MB/s 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:31:02.718 13:51:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:31:03.653 512+0 records in 00:31:03.653 512+0 records out 00:31:03.653 100663296 bytes (101 MB, 96 MiB) copied, 0.679881 s, 148 MB/s 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:31:03.653 [2024-11-20 13:51:06.525553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.653 [2024-11-20 13:51:06.545193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:03.653 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:03.654 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:03.654 13:51:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.654 13:51:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:03.912 13:51:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.912 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:03.912 "name": "raid_bdev1", 00:31:03.912 "uuid": "caa754fa-f118-43ee-9f52-34317c666595", 00:31:03.912 "strip_size_kb": 64, 00:31:03.912 "state": "online", 00:31:03.912 "raid_level": "raid5f", 00:31:03.912 "superblock": false, 00:31:03.912 "num_base_bdevs": 4, 00:31:03.912 "num_base_bdevs_discovered": 3, 00:31:03.912 "num_base_bdevs_operational": 3, 00:31:03.912 "base_bdevs_list": [ 00:31:03.912 { 00:31:03.912 "name": null, 00:31:03.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.912 "is_configured": false, 00:31:03.912 "data_offset": 0, 00:31:03.912 "data_size": 65536 00:31:03.912 }, 00:31:03.912 { 00:31:03.912 "name": "BaseBdev2", 00:31:03.912 "uuid": "53c27802-2acc-57d7-a825-f6bf1b6e4c3f", 00:31:03.912 "is_configured": true, 00:31:03.912 "data_offset": 0, 00:31:03.912 "data_size": 65536 00:31:03.912 }, 00:31:03.912 { 00:31:03.912 "name": "BaseBdev3", 00:31:03.912 "uuid": 
"2694c93b-a7d8-5364-ac9d-8b6e6a692add", 00:31:03.912 "is_configured": true, 00:31:03.912 "data_offset": 0, 00:31:03.912 "data_size": 65536 00:31:03.912 }, 00:31:03.912 { 00:31:03.912 "name": "BaseBdev4", 00:31:03.912 "uuid": "a7dfbc78-d687-5720-89f0-3af332e9981f", 00:31:03.912 "is_configured": true, 00:31:03.912 "data_offset": 0, 00:31:03.912 "data_size": 65536 00:31:03.912 } 00:31:03.912 ] 00:31:03.912 }' 00:31:03.912 13:51:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:03.912 13:51:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.171 13:51:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:04.171 13:51:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.171 13:51:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.171 [2024-11-20 13:51:07.065356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:04.171 [2024-11-20 13:51:07.079846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:31:04.171 13:51:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.171 13:51:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:31:04.429 [2024-11-20 13:51:07.088982] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:05.378 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:05.378 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:05.378 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:05.378 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:05.378 13:51:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:05.378 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:05.378 13:51:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.378 13:51:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.378 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:05.378 13:51:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.378 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:05.378 "name": "raid_bdev1", 00:31:05.378 "uuid": "caa754fa-f118-43ee-9f52-34317c666595", 00:31:05.378 "strip_size_kb": 64, 00:31:05.378 "state": "online", 00:31:05.378 "raid_level": "raid5f", 00:31:05.378 "superblock": false, 00:31:05.378 "num_base_bdevs": 4, 00:31:05.378 "num_base_bdevs_discovered": 4, 00:31:05.378 "num_base_bdevs_operational": 4, 00:31:05.378 "process": { 00:31:05.378 "type": "rebuild", 00:31:05.378 "target": "spare", 00:31:05.378 "progress": { 00:31:05.378 "blocks": 17280, 00:31:05.378 "percent": 8 00:31:05.378 } 00:31:05.378 }, 00:31:05.378 "base_bdevs_list": [ 00:31:05.378 { 00:31:05.378 "name": "spare", 00:31:05.378 "uuid": "421e389f-29af-57d9-b543-4bac270de85c", 00:31:05.378 "is_configured": true, 00:31:05.378 "data_offset": 0, 00:31:05.378 "data_size": 65536 00:31:05.378 }, 00:31:05.378 { 00:31:05.378 "name": "BaseBdev2", 00:31:05.378 "uuid": "53c27802-2acc-57d7-a825-f6bf1b6e4c3f", 00:31:05.378 "is_configured": true, 00:31:05.378 "data_offset": 0, 00:31:05.378 "data_size": 65536 00:31:05.378 }, 00:31:05.378 { 00:31:05.378 "name": "BaseBdev3", 00:31:05.378 "uuid": "2694c93b-a7d8-5364-ac9d-8b6e6a692add", 00:31:05.378 "is_configured": true, 00:31:05.378 "data_offset": 0, 00:31:05.378 "data_size": 65536 00:31:05.378 }, 
00:31:05.378 { 00:31:05.378 "name": "BaseBdev4", 00:31:05.378 "uuid": "a7dfbc78-d687-5720-89f0-3af332e9981f", 00:31:05.378 "is_configured": true, 00:31:05.378 "data_offset": 0, 00:31:05.378 "data_size": 65536 00:31:05.378 } 00:31:05.378 ] 00:31:05.378 }' 00:31:05.378 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:05.378 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:05.378 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:05.378 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:05.378 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:31:05.378 13:51:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.378 13:51:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.378 [2024-11-20 13:51:08.250321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:05.636 [2024-11-20 13:51:08.301165] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:05.636 [2024-11-20 13:51:08.301245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:05.636 [2024-11-20 13:51:08.301270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:05.636 [2024-11-20 13:51:08.301288] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:05.636 13:51:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.636 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:05.636 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:31:05.636 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:05.636 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:05.636 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:05.636 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:05.636 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:05.636 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:05.636 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:05.636 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:05.636 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:05.636 13:51:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.636 13:51:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.636 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:05.636 13:51:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.636 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:05.636 "name": "raid_bdev1", 00:31:05.636 "uuid": "caa754fa-f118-43ee-9f52-34317c666595", 00:31:05.636 "strip_size_kb": 64, 00:31:05.636 "state": "online", 00:31:05.636 "raid_level": "raid5f", 00:31:05.636 "superblock": false, 00:31:05.636 "num_base_bdevs": 4, 00:31:05.636 "num_base_bdevs_discovered": 3, 00:31:05.636 "num_base_bdevs_operational": 3, 00:31:05.636 "base_bdevs_list": [ 00:31:05.636 { 00:31:05.636 "name": null, 00:31:05.636 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:05.636 "is_configured": false, 00:31:05.637 "data_offset": 0, 00:31:05.637 "data_size": 65536 00:31:05.637 }, 00:31:05.637 { 00:31:05.637 "name": "BaseBdev2", 00:31:05.637 "uuid": "53c27802-2acc-57d7-a825-f6bf1b6e4c3f", 00:31:05.637 "is_configured": true, 00:31:05.637 "data_offset": 0, 00:31:05.637 "data_size": 65536 00:31:05.637 }, 00:31:05.637 { 00:31:05.637 "name": "BaseBdev3", 00:31:05.637 "uuid": "2694c93b-a7d8-5364-ac9d-8b6e6a692add", 00:31:05.637 "is_configured": true, 00:31:05.637 "data_offset": 0, 00:31:05.637 "data_size": 65536 00:31:05.637 }, 00:31:05.637 { 00:31:05.637 "name": "BaseBdev4", 00:31:05.637 "uuid": "a7dfbc78-d687-5720-89f0-3af332e9981f", 00:31:05.637 "is_configured": true, 00:31:05.637 "data_offset": 0, 00:31:05.637 "data_size": 65536 00:31:05.637 } 00:31:05.637 ] 00:31:05.637 }' 00:31:05.637 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:05.637 13:51:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.204 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:06.204 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:06.204 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:06.204 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:06.204 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:06.204 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:06.204 13:51:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.204 13:51:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.204 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:31:06.204 13:51:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.204 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:06.204 "name": "raid_bdev1", 00:31:06.204 "uuid": "caa754fa-f118-43ee-9f52-34317c666595", 00:31:06.204 "strip_size_kb": 64, 00:31:06.204 "state": "online", 00:31:06.204 "raid_level": "raid5f", 00:31:06.204 "superblock": false, 00:31:06.204 "num_base_bdevs": 4, 00:31:06.204 "num_base_bdevs_discovered": 3, 00:31:06.204 "num_base_bdevs_operational": 3, 00:31:06.204 "base_bdevs_list": [ 00:31:06.204 { 00:31:06.204 "name": null, 00:31:06.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:06.204 "is_configured": false, 00:31:06.204 "data_offset": 0, 00:31:06.204 "data_size": 65536 00:31:06.204 }, 00:31:06.204 { 00:31:06.204 "name": "BaseBdev2", 00:31:06.204 "uuid": "53c27802-2acc-57d7-a825-f6bf1b6e4c3f", 00:31:06.204 "is_configured": true, 00:31:06.204 "data_offset": 0, 00:31:06.204 "data_size": 65536 00:31:06.204 }, 00:31:06.204 { 00:31:06.204 "name": "BaseBdev3", 00:31:06.204 "uuid": "2694c93b-a7d8-5364-ac9d-8b6e6a692add", 00:31:06.204 "is_configured": true, 00:31:06.204 "data_offset": 0, 00:31:06.204 "data_size": 65536 00:31:06.204 }, 00:31:06.204 { 00:31:06.204 "name": "BaseBdev4", 00:31:06.204 "uuid": "a7dfbc78-d687-5720-89f0-3af332e9981f", 00:31:06.204 "is_configured": true, 00:31:06.204 "data_offset": 0, 00:31:06.204 "data_size": 65536 00:31:06.204 } 00:31:06.204 ] 00:31:06.204 }' 00:31:06.204 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:06.204 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:06.204 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:06.204 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:31:06.204 13:51:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:06.204 13:51:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.204 13:51:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:06.204 [2024-11-20 13:51:09.004465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:06.204 [2024-11-20 13:51:09.017886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:31:06.204 13:51:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.204 13:51:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:31:06.204 [2024-11-20 13:51:09.026227] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:07.142 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:07.142 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:07.142 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:07.142 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:07.142 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:07.142 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:07.142 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:07.142 13:51:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.142 13:51:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.142 13:51:10 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.401 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:07.401 "name": "raid_bdev1", 00:31:07.401 "uuid": "caa754fa-f118-43ee-9f52-34317c666595", 00:31:07.401 "strip_size_kb": 64, 00:31:07.401 "state": "online", 00:31:07.401 "raid_level": "raid5f", 00:31:07.401 "superblock": false, 00:31:07.401 "num_base_bdevs": 4, 00:31:07.401 "num_base_bdevs_discovered": 4, 00:31:07.401 "num_base_bdevs_operational": 4, 00:31:07.401 "process": { 00:31:07.401 "type": "rebuild", 00:31:07.401 "target": "spare", 00:31:07.401 "progress": { 00:31:07.401 "blocks": 17280, 00:31:07.401 "percent": 8 00:31:07.401 } 00:31:07.401 }, 00:31:07.402 "base_bdevs_list": [ 00:31:07.402 { 00:31:07.402 "name": "spare", 00:31:07.402 "uuid": "421e389f-29af-57d9-b543-4bac270de85c", 00:31:07.402 "is_configured": true, 00:31:07.402 "data_offset": 0, 00:31:07.402 "data_size": 65536 00:31:07.402 }, 00:31:07.402 { 00:31:07.402 "name": "BaseBdev2", 00:31:07.402 "uuid": "53c27802-2acc-57d7-a825-f6bf1b6e4c3f", 00:31:07.402 "is_configured": true, 00:31:07.402 "data_offset": 0, 00:31:07.402 "data_size": 65536 00:31:07.402 }, 00:31:07.402 { 00:31:07.402 "name": "BaseBdev3", 00:31:07.402 "uuid": "2694c93b-a7d8-5364-ac9d-8b6e6a692add", 00:31:07.402 "is_configured": true, 00:31:07.402 "data_offset": 0, 00:31:07.402 "data_size": 65536 00:31:07.402 }, 00:31:07.402 { 00:31:07.402 "name": "BaseBdev4", 00:31:07.402 "uuid": "a7dfbc78-d687-5720-89f0-3af332e9981f", 00:31:07.402 "is_configured": true, 00:31:07.402 "data_offset": 0, 00:31:07.402 "data_size": 65536 00:31:07.402 } 00:31:07.402 ] 00:31:07.402 }' 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=681 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:07.402 "name": "raid_bdev1", 00:31:07.402 "uuid": "caa754fa-f118-43ee-9f52-34317c666595", 00:31:07.402 "strip_size_kb": 64, 
00:31:07.402 "state": "online", 00:31:07.402 "raid_level": "raid5f", 00:31:07.402 "superblock": false, 00:31:07.402 "num_base_bdevs": 4, 00:31:07.402 "num_base_bdevs_discovered": 4, 00:31:07.402 "num_base_bdevs_operational": 4, 00:31:07.402 "process": { 00:31:07.402 "type": "rebuild", 00:31:07.402 "target": "spare", 00:31:07.402 "progress": { 00:31:07.402 "blocks": 21120, 00:31:07.402 "percent": 10 00:31:07.402 } 00:31:07.402 }, 00:31:07.402 "base_bdevs_list": [ 00:31:07.402 { 00:31:07.402 "name": "spare", 00:31:07.402 "uuid": "421e389f-29af-57d9-b543-4bac270de85c", 00:31:07.402 "is_configured": true, 00:31:07.402 "data_offset": 0, 00:31:07.402 "data_size": 65536 00:31:07.402 }, 00:31:07.402 { 00:31:07.402 "name": "BaseBdev2", 00:31:07.402 "uuid": "53c27802-2acc-57d7-a825-f6bf1b6e4c3f", 00:31:07.402 "is_configured": true, 00:31:07.402 "data_offset": 0, 00:31:07.402 "data_size": 65536 00:31:07.402 }, 00:31:07.402 { 00:31:07.402 "name": "BaseBdev3", 00:31:07.402 "uuid": "2694c93b-a7d8-5364-ac9d-8b6e6a692add", 00:31:07.402 "is_configured": true, 00:31:07.402 "data_offset": 0, 00:31:07.402 "data_size": 65536 00:31:07.402 }, 00:31:07.402 { 00:31:07.402 "name": "BaseBdev4", 00:31:07.402 "uuid": "a7dfbc78-d687-5720-89f0-3af332e9981f", 00:31:07.402 "is_configured": true, 00:31:07.402 "data_offset": 0, 00:31:07.402 "data_size": 65536 00:31:07.402 } 00:31:07.402 ] 00:31:07.402 }' 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:07.402 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:07.661 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:07.661 13:51:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:08.730 13:51:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:08.730 13:51:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:08.730 13:51:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:08.730 13:51:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:08.730 13:51:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:08.730 13:51:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:08.730 13:51:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:08.730 13:51:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:08.730 13:51:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.730 13:51:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:08.730 13:51:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.730 13:51:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:08.730 "name": "raid_bdev1", 00:31:08.730 "uuid": "caa754fa-f118-43ee-9f52-34317c666595", 00:31:08.730 "strip_size_kb": 64, 00:31:08.730 "state": "online", 00:31:08.730 "raid_level": "raid5f", 00:31:08.730 "superblock": false, 00:31:08.730 "num_base_bdevs": 4, 00:31:08.730 "num_base_bdevs_discovered": 4, 00:31:08.730 "num_base_bdevs_operational": 4, 00:31:08.730 "process": { 00:31:08.730 "type": "rebuild", 00:31:08.730 "target": "spare", 00:31:08.730 "progress": { 00:31:08.730 "blocks": 44160, 00:31:08.730 "percent": 22 00:31:08.730 } 00:31:08.730 }, 00:31:08.730 "base_bdevs_list": [ 00:31:08.730 { 00:31:08.730 "name": "spare", 00:31:08.730 "uuid": "421e389f-29af-57d9-b543-4bac270de85c", 00:31:08.730 "is_configured": true, 
00:31:08.730 "data_offset": 0, 00:31:08.730 "data_size": 65536 00:31:08.730 }, 00:31:08.730 { 00:31:08.730 "name": "BaseBdev2", 00:31:08.730 "uuid": "53c27802-2acc-57d7-a825-f6bf1b6e4c3f", 00:31:08.730 "is_configured": true, 00:31:08.730 "data_offset": 0, 00:31:08.730 "data_size": 65536 00:31:08.730 }, 00:31:08.730 { 00:31:08.730 "name": "BaseBdev3", 00:31:08.730 "uuid": "2694c93b-a7d8-5364-ac9d-8b6e6a692add", 00:31:08.730 "is_configured": true, 00:31:08.730 "data_offset": 0, 00:31:08.730 "data_size": 65536 00:31:08.730 }, 00:31:08.730 { 00:31:08.730 "name": "BaseBdev4", 00:31:08.730 "uuid": "a7dfbc78-d687-5720-89f0-3af332e9981f", 00:31:08.730 "is_configured": true, 00:31:08.730 "data_offset": 0, 00:31:08.730 "data_size": 65536 00:31:08.730 } 00:31:08.730 ] 00:31:08.730 }' 00:31:08.730 13:51:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:08.730 13:51:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:08.730 13:51:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:08.730 13:51:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:08.730 13:51:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:09.667 13:51:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:09.667 13:51:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:09.667 13:51:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:09.667 13:51:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:09.667 13:51:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:09.667 13:51:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:31:09.667 13:51:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:09.667 13:51:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.667 13:51:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:09.667 13:51:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.667 13:51:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.667 13:51:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:09.667 "name": "raid_bdev1", 00:31:09.667 "uuid": "caa754fa-f118-43ee-9f52-34317c666595", 00:31:09.667 "strip_size_kb": 64, 00:31:09.667 "state": "online", 00:31:09.667 "raid_level": "raid5f", 00:31:09.667 "superblock": false, 00:31:09.667 "num_base_bdevs": 4, 00:31:09.667 "num_base_bdevs_discovered": 4, 00:31:09.667 "num_base_bdevs_operational": 4, 00:31:09.667 "process": { 00:31:09.667 "type": "rebuild", 00:31:09.667 "target": "spare", 00:31:09.667 "progress": { 00:31:09.667 "blocks": 65280, 00:31:09.667 "percent": 33 00:31:09.667 } 00:31:09.667 }, 00:31:09.667 "base_bdevs_list": [ 00:31:09.667 { 00:31:09.667 "name": "spare", 00:31:09.667 "uuid": "421e389f-29af-57d9-b543-4bac270de85c", 00:31:09.667 "is_configured": true, 00:31:09.667 "data_offset": 0, 00:31:09.667 "data_size": 65536 00:31:09.667 }, 00:31:09.667 { 00:31:09.667 "name": "BaseBdev2", 00:31:09.667 "uuid": "53c27802-2acc-57d7-a825-f6bf1b6e4c3f", 00:31:09.667 "is_configured": true, 00:31:09.667 "data_offset": 0, 00:31:09.667 "data_size": 65536 00:31:09.667 }, 00:31:09.667 { 00:31:09.667 "name": "BaseBdev3", 00:31:09.667 "uuid": "2694c93b-a7d8-5364-ac9d-8b6e6a692add", 00:31:09.667 "is_configured": true, 00:31:09.667 "data_offset": 0, 00:31:09.667 "data_size": 65536 00:31:09.667 }, 00:31:09.667 { 00:31:09.667 "name": "BaseBdev4", 00:31:09.667 "uuid": 
"a7dfbc78-d687-5720-89f0-3af332e9981f", 00:31:09.667 "is_configured": true, 00:31:09.667 "data_offset": 0, 00:31:09.667 "data_size": 65536 00:31:09.667 } 00:31:09.667 ] 00:31:09.667 }' 00:31:09.667 13:51:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:09.926 13:51:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:09.926 13:51:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:09.926 13:51:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:09.926 13:51:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:10.862 13:51:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:10.862 13:51:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:10.862 13:51:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:10.862 13:51:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:10.862 13:51:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:10.862 13:51:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:10.862 13:51:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:10.862 13:51:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:10.862 13:51:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.862 13:51:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.862 13:51:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.862 13:51:13 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:10.862 "name": "raid_bdev1", 00:31:10.862 "uuid": "caa754fa-f118-43ee-9f52-34317c666595", 00:31:10.862 "strip_size_kb": 64, 00:31:10.862 "state": "online", 00:31:10.862 "raid_level": "raid5f", 00:31:10.862 "superblock": false, 00:31:10.862 "num_base_bdevs": 4, 00:31:10.862 "num_base_bdevs_discovered": 4, 00:31:10.862 "num_base_bdevs_operational": 4, 00:31:10.862 "process": { 00:31:10.862 "type": "rebuild", 00:31:10.862 "target": "spare", 00:31:10.862 "progress": { 00:31:10.862 "blocks": 88320, 00:31:10.862 "percent": 44 00:31:10.862 } 00:31:10.862 }, 00:31:10.862 "base_bdevs_list": [ 00:31:10.862 { 00:31:10.862 "name": "spare", 00:31:10.862 "uuid": "421e389f-29af-57d9-b543-4bac270de85c", 00:31:10.862 "is_configured": true, 00:31:10.862 "data_offset": 0, 00:31:10.862 "data_size": 65536 00:31:10.862 }, 00:31:10.862 { 00:31:10.862 "name": "BaseBdev2", 00:31:10.862 "uuid": "53c27802-2acc-57d7-a825-f6bf1b6e4c3f", 00:31:10.862 "is_configured": true, 00:31:10.862 "data_offset": 0, 00:31:10.862 "data_size": 65536 00:31:10.862 }, 00:31:10.862 { 00:31:10.862 "name": "BaseBdev3", 00:31:10.862 "uuid": "2694c93b-a7d8-5364-ac9d-8b6e6a692add", 00:31:10.862 "is_configured": true, 00:31:10.862 "data_offset": 0, 00:31:10.862 "data_size": 65536 00:31:10.862 }, 00:31:10.862 { 00:31:10.862 "name": "BaseBdev4", 00:31:10.862 "uuid": "a7dfbc78-d687-5720-89f0-3af332e9981f", 00:31:10.862 "is_configured": true, 00:31:10.862 "data_offset": 0, 00:31:10.863 "data_size": 65536 00:31:10.863 } 00:31:10.863 ] 00:31:10.863 }' 00:31:10.863 13:51:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:11.121 13:51:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:11.121 13:51:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:11.121 13:51:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:31:11.121 13:51:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:12.057 13:51:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:12.057 13:51:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:12.057 13:51:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:12.057 13:51:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:12.057 13:51:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:12.057 13:51:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:12.057 13:51:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:12.057 13:51:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.058 13:51:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:12.058 13:51:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:12.058 13:51:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.058 13:51:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:12.058 "name": "raid_bdev1", 00:31:12.058 "uuid": "caa754fa-f118-43ee-9f52-34317c666595", 00:31:12.058 "strip_size_kb": 64, 00:31:12.058 "state": "online", 00:31:12.058 "raid_level": "raid5f", 00:31:12.058 "superblock": false, 00:31:12.058 "num_base_bdevs": 4, 00:31:12.058 "num_base_bdevs_discovered": 4, 00:31:12.058 "num_base_bdevs_operational": 4, 00:31:12.058 "process": { 00:31:12.058 "type": "rebuild", 00:31:12.058 "target": "spare", 00:31:12.058 "progress": { 00:31:12.058 "blocks": 109440, 00:31:12.058 "percent": 55 00:31:12.058 } 00:31:12.058 }, 00:31:12.058 
"base_bdevs_list": [ 00:31:12.058 { 00:31:12.058 "name": "spare", 00:31:12.058 "uuid": "421e389f-29af-57d9-b543-4bac270de85c", 00:31:12.058 "is_configured": true, 00:31:12.058 "data_offset": 0, 00:31:12.058 "data_size": 65536 00:31:12.058 }, 00:31:12.058 { 00:31:12.058 "name": "BaseBdev2", 00:31:12.058 "uuid": "53c27802-2acc-57d7-a825-f6bf1b6e4c3f", 00:31:12.058 "is_configured": true, 00:31:12.058 "data_offset": 0, 00:31:12.058 "data_size": 65536 00:31:12.058 }, 00:31:12.058 { 00:31:12.058 "name": "BaseBdev3", 00:31:12.058 "uuid": "2694c93b-a7d8-5364-ac9d-8b6e6a692add", 00:31:12.058 "is_configured": true, 00:31:12.058 "data_offset": 0, 00:31:12.058 "data_size": 65536 00:31:12.058 }, 00:31:12.058 { 00:31:12.058 "name": "BaseBdev4", 00:31:12.058 "uuid": "a7dfbc78-d687-5720-89f0-3af332e9981f", 00:31:12.058 "is_configured": true, 00:31:12.058 "data_offset": 0, 00:31:12.058 "data_size": 65536 00:31:12.058 } 00:31:12.058 ] 00:31:12.058 }' 00:31:12.058 13:51:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:12.058 13:51:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:12.058 13:51:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:12.316 13:51:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:12.316 13:51:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:13.254 13:51:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:13.254 13:51:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:13.254 13:51:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:13.254 13:51:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:13.254 13:51:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:13.254 13:51:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:13.254 13:51:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:13.254 13:51:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.254 13:51:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.254 13:51:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:13.254 13:51:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.254 13:51:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:13.254 "name": "raid_bdev1", 00:31:13.254 "uuid": "caa754fa-f118-43ee-9f52-34317c666595", 00:31:13.254 "strip_size_kb": 64, 00:31:13.254 "state": "online", 00:31:13.254 "raid_level": "raid5f", 00:31:13.254 "superblock": false, 00:31:13.254 "num_base_bdevs": 4, 00:31:13.254 "num_base_bdevs_discovered": 4, 00:31:13.254 "num_base_bdevs_operational": 4, 00:31:13.254 "process": { 00:31:13.254 "type": "rebuild", 00:31:13.254 "target": "spare", 00:31:13.254 "progress": { 00:31:13.254 "blocks": 132480, 00:31:13.254 "percent": 67 00:31:13.254 } 00:31:13.254 }, 00:31:13.254 "base_bdevs_list": [ 00:31:13.254 { 00:31:13.254 "name": "spare", 00:31:13.254 "uuid": "421e389f-29af-57d9-b543-4bac270de85c", 00:31:13.254 "is_configured": true, 00:31:13.254 "data_offset": 0, 00:31:13.254 "data_size": 65536 00:31:13.254 }, 00:31:13.254 { 00:31:13.254 "name": "BaseBdev2", 00:31:13.254 "uuid": "53c27802-2acc-57d7-a825-f6bf1b6e4c3f", 00:31:13.254 "is_configured": true, 00:31:13.254 "data_offset": 0, 00:31:13.254 "data_size": 65536 00:31:13.254 }, 00:31:13.254 { 00:31:13.254 "name": "BaseBdev3", 00:31:13.254 "uuid": "2694c93b-a7d8-5364-ac9d-8b6e6a692add", 00:31:13.254 
"is_configured": true, 00:31:13.254 "data_offset": 0, 00:31:13.254 "data_size": 65536 00:31:13.254 }, 00:31:13.254 { 00:31:13.254 "name": "BaseBdev4", 00:31:13.254 "uuid": "a7dfbc78-d687-5720-89f0-3af332e9981f", 00:31:13.254 "is_configured": true, 00:31:13.254 "data_offset": 0, 00:31:13.254 "data_size": 65536 00:31:13.254 } 00:31:13.254 ] 00:31:13.254 }' 00:31:13.254 13:51:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:13.254 13:51:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:13.254 13:51:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:13.513 13:51:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:13.513 13:51:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:14.483 13:51:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:14.483 13:51:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:14.483 13:51:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:14.483 13:51:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:14.483 13:51:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:14.483 13:51:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:14.483 13:51:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:14.483 13:51:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.483 13:51:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.483 13:51:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:31:14.483 13:51:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.483 13:51:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:14.483 "name": "raid_bdev1", 00:31:14.483 "uuid": "caa754fa-f118-43ee-9f52-34317c666595", 00:31:14.483 "strip_size_kb": 64, 00:31:14.483 "state": "online", 00:31:14.483 "raid_level": "raid5f", 00:31:14.483 "superblock": false, 00:31:14.483 "num_base_bdevs": 4, 00:31:14.483 "num_base_bdevs_discovered": 4, 00:31:14.483 "num_base_bdevs_operational": 4, 00:31:14.483 "process": { 00:31:14.483 "type": "rebuild", 00:31:14.483 "target": "spare", 00:31:14.483 "progress": { 00:31:14.483 "blocks": 153600, 00:31:14.483 "percent": 78 00:31:14.483 } 00:31:14.483 }, 00:31:14.483 "base_bdevs_list": [ 00:31:14.483 { 00:31:14.483 "name": "spare", 00:31:14.483 "uuid": "421e389f-29af-57d9-b543-4bac270de85c", 00:31:14.483 "is_configured": true, 00:31:14.483 "data_offset": 0, 00:31:14.483 "data_size": 65536 00:31:14.483 }, 00:31:14.483 { 00:31:14.483 "name": "BaseBdev2", 00:31:14.483 "uuid": "53c27802-2acc-57d7-a825-f6bf1b6e4c3f", 00:31:14.483 "is_configured": true, 00:31:14.483 "data_offset": 0, 00:31:14.483 "data_size": 65536 00:31:14.483 }, 00:31:14.483 { 00:31:14.483 "name": "BaseBdev3", 00:31:14.483 "uuid": "2694c93b-a7d8-5364-ac9d-8b6e6a692add", 00:31:14.483 "is_configured": true, 00:31:14.483 "data_offset": 0, 00:31:14.483 "data_size": 65536 00:31:14.483 }, 00:31:14.483 { 00:31:14.483 "name": "BaseBdev4", 00:31:14.483 "uuid": "a7dfbc78-d687-5720-89f0-3af332e9981f", 00:31:14.483 "is_configured": true, 00:31:14.483 "data_offset": 0, 00:31:14.483 "data_size": 65536 00:31:14.483 } 00:31:14.483 ] 00:31:14.483 }' 00:31:14.483 13:51:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:14.483 13:51:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:14.483 13:51:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:14.483 13:51:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:14.483 13:51:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:15.857 13:51:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:15.857 13:51:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:15.857 13:51:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:15.857 13:51:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:15.857 13:51:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:15.857 13:51:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:15.857 13:51:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:15.857 13:51:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:15.857 13:51:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.857 13:51:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.857 13:51:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.857 13:51:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:15.857 "name": "raid_bdev1", 00:31:15.857 "uuid": "caa754fa-f118-43ee-9f52-34317c666595", 00:31:15.857 "strip_size_kb": 64, 00:31:15.857 "state": "online", 00:31:15.857 "raid_level": "raid5f", 00:31:15.857 "superblock": false, 00:31:15.857 "num_base_bdevs": 4, 00:31:15.857 "num_base_bdevs_discovered": 4, 00:31:15.857 "num_base_bdevs_operational": 4, 00:31:15.857 "process": { 00:31:15.857 
"type": "rebuild", 00:31:15.857 "target": "spare", 00:31:15.857 "progress": { 00:31:15.857 "blocks": 176640, 00:31:15.857 "percent": 89 00:31:15.857 } 00:31:15.857 }, 00:31:15.857 "base_bdevs_list": [ 00:31:15.857 { 00:31:15.857 "name": "spare", 00:31:15.857 "uuid": "421e389f-29af-57d9-b543-4bac270de85c", 00:31:15.857 "is_configured": true, 00:31:15.857 "data_offset": 0, 00:31:15.857 "data_size": 65536 00:31:15.857 }, 00:31:15.857 { 00:31:15.857 "name": "BaseBdev2", 00:31:15.857 "uuid": "53c27802-2acc-57d7-a825-f6bf1b6e4c3f", 00:31:15.857 "is_configured": true, 00:31:15.857 "data_offset": 0, 00:31:15.857 "data_size": 65536 00:31:15.857 }, 00:31:15.857 { 00:31:15.857 "name": "BaseBdev3", 00:31:15.857 "uuid": "2694c93b-a7d8-5364-ac9d-8b6e6a692add", 00:31:15.857 "is_configured": true, 00:31:15.857 "data_offset": 0, 00:31:15.857 "data_size": 65536 00:31:15.857 }, 00:31:15.857 { 00:31:15.857 "name": "BaseBdev4", 00:31:15.857 "uuid": "a7dfbc78-d687-5720-89f0-3af332e9981f", 00:31:15.857 "is_configured": true, 00:31:15.857 "data_offset": 0, 00:31:15.857 "data_size": 65536 00:31:15.857 } 00:31:15.857 ] 00:31:15.857 }' 00:31:15.857 13:51:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:15.857 13:51:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:15.857 13:51:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:15.857 13:51:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:15.857 13:51:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:16.813 [2024-11-20 13:51:19.433579] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:16.813 [2024-11-20 13:51:19.433682] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:16.813 [2024-11-20 13:51:19.433742] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:16.813 "name": "raid_bdev1", 00:31:16.813 "uuid": "caa754fa-f118-43ee-9f52-34317c666595", 00:31:16.813 "strip_size_kb": 64, 00:31:16.813 "state": "online", 00:31:16.813 "raid_level": "raid5f", 00:31:16.813 "superblock": false, 00:31:16.813 "num_base_bdevs": 4, 00:31:16.813 "num_base_bdevs_discovered": 4, 00:31:16.813 "num_base_bdevs_operational": 4, 00:31:16.813 "base_bdevs_list": [ 00:31:16.813 { 00:31:16.813 "name": "spare", 00:31:16.813 "uuid": "421e389f-29af-57d9-b543-4bac270de85c", 00:31:16.813 "is_configured": true, 00:31:16.813 "data_offset": 0, 00:31:16.813 "data_size": 65536 00:31:16.813 }, 00:31:16.813 { 
00:31:16.813 "name": "BaseBdev2", 00:31:16.813 "uuid": "53c27802-2acc-57d7-a825-f6bf1b6e4c3f", 00:31:16.813 "is_configured": true, 00:31:16.813 "data_offset": 0, 00:31:16.813 "data_size": 65536 00:31:16.813 }, 00:31:16.813 { 00:31:16.813 "name": "BaseBdev3", 00:31:16.813 "uuid": "2694c93b-a7d8-5364-ac9d-8b6e6a692add", 00:31:16.813 "is_configured": true, 00:31:16.813 "data_offset": 0, 00:31:16.813 "data_size": 65536 00:31:16.813 }, 00:31:16.813 { 00:31:16.813 "name": "BaseBdev4", 00:31:16.813 "uuid": "a7dfbc78-d687-5720-89f0-3af332e9981f", 00:31:16.813 "is_configured": true, 00:31:16.813 "data_offset": 0, 00:31:16.813 "data_size": 65536 00:31:16.813 } 00:31:16.813 ] 00:31:16.813 }' 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:16.813 13:51:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:17.072 "name": "raid_bdev1", 00:31:17.072 "uuid": "caa754fa-f118-43ee-9f52-34317c666595", 00:31:17.072 "strip_size_kb": 64, 00:31:17.072 "state": "online", 00:31:17.072 "raid_level": "raid5f", 00:31:17.072 "superblock": false, 00:31:17.072 "num_base_bdevs": 4, 00:31:17.072 "num_base_bdevs_discovered": 4, 00:31:17.072 "num_base_bdevs_operational": 4, 00:31:17.072 "base_bdevs_list": [ 00:31:17.072 { 00:31:17.072 "name": "spare", 00:31:17.072 "uuid": "421e389f-29af-57d9-b543-4bac270de85c", 00:31:17.072 "is_configured": true, 00:31:17.072 "data_offset": 0, 00:31:17.072 "data_size": 65536 00:31:17.072 }, 00:31:17.072 { 00:31:17.072 "name": "BaseBdev2", 00:31:17.072 "uuid": "53c27802-2acc-57d7-a825-f6bf1b6e4c3f", 00:31:17.072 "is_configured": true, 00:31:17.072 "data_offset": 0, 00:31:17.072 "data_size": 65536 00:31:17.072 }, 00:31:17.072 { 00:31:17.072 "name": "BaseBdev3", 00:31:17.072 "uuid": "2694c93b-a7d8-5364-ac9d-8b6e6a692add", 00:31:17.072 "is_configured": true, 00:31:17.072 "data_offset": 0, 00:31:17.072 "data_size": 65536 00:31:17.072 }, 00:31:17.072 { 00:31:17.072 "name": "BaseBdev4", 00:31:17.072 "uuid": "a7dfbc78-d687-5720-89f0-3af332e9981f", 00:31:17.072 "is_configured": true, 00:31:17.072 "data_offset": 0, 00:31:17.072 "data_size": 65536 00:31:17.072 } 00:31:17.072 ] 00:31:17.072 }' 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:17.072 13:51:19 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.072 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:17.072 "name": "raid_bdev1", 00:31:17.072 "uuid": 
"caa754fa-f118-43ee-9f52-34317c666595", 00:31:17.072 "strip_size_kb": 64, 00:31:17.072 "state": "online", 00:31:17.072 "raid_level": "raid5f", 00:31:17.072 "superblock": false, 00:31:17.072 "num_base_bdevs": 4, 00:31:17.072 "num_base_bdevs_discovered": 4, 00:31:17.072 "num_base_bdevs_operational": 4, 00:31:17.072 "base_bdevs_list": [ 00:31:17.072 { 00:31:17.072 "name": "spare", 00:31:17.072 "uuid": "421e389f-29af-57d9-b543-4bac270de85c", 00:31:17.072 "is_configured": true, 00:31:17.072 "data_offset": 0, 00:31:17.072 "data_size": 65536 00:31:17.072 }, 00:31:17.072 { 00:31:17.072 "name": "BaseBdev2", 00:31:17.072 "uuid": "53c27802-2acc-57d7-a825-f6bf1b6e4c3f", 00:31:17.072 "is_configured": true, 00:31:17.072 "data_offset": 0, 00:31:17.072 "data_size": 65536 00:31:17.073 }, 00:31:17.073 { 00:31:17.073 "name": "BaseBdev3", 00:31:17.073 "uuid": "2694c93b-a7d8-5364-ac9d-8b6e6a692add", 00:31:17.073 "is_configured": true, 00:31:17.073 "data_offset": 0, 00:31:17.073 "data_size": 65536 00:31:17.073 }, 00:31:17.073 { 00:31:17.073 "name": "BaseBdev4", 00:31:17.073 "uuid": "a7dfbc78-d687-5720-89f0-3af332e9981f", 00:31:17.073 "is_configured": true, 00:31:17.073 "data_offset": 0, 00:31:17.073 "data_size": 65536 00:31:17.073 } 00:31:17.073 ] 00:31:17.073 }' 00:31:17.073 13:51:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:17.073 13:51:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.639 [2024-11-20 13:51:20.403069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:17.639 [2024-11-20 13:51:20.403123] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:31:17.639 [2024-11-20 13:51:20.403240] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:17.639 [2024-11-20 13:51:20.403380] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:17.639 [2024-11-20 13:51:20.403397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:17.639 13:51:20 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:17.639 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:17.898 /dev/nbd0 00:31:17.898 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:17.898 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:17.898 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:17.898 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:31:17.898 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:17.898 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:17.898 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:17.898 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:31:17.898 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:17.898 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:17.898 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:17.898 1+0 records in 00:31:17.898 1+0 records out 00:31:17.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323829 s, 12.6 MB/s 00:31:17.898 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:17.898 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:31:17.898 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:18.156 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:18.156 13:51:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:31:18.156 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:18.156 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:18.156 13:51:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:31:18.415 /dev/nbd1 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:18.415 1+0 records in 00:31:18.415 1+0 records out 00:31:18.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411269 s, 10.0 MB/s 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:18.415 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:31:18.982 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:18.982 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:18.982 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:18.982 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:18.982 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:18.982 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:18.982 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:18.982 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:18.982 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:18.982 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:31:18.982 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:19.241 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:19.241 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:19.241 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:19.241 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:19.241 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:19.241 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:19.241 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:19.241 13:51:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:31:19.241 13:51:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85257 00:31:19.241 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85257 ']' 00:31:19.241 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85257 00:31:19.241 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:31:19.241 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:19.241 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85257 00:31:19.241 killing process with pid 85257 00:31:19.241 Received shutdown signal, test time was about 60.000000 seconds 00:31:19.241 00:31:19.241 Latency(us) 00:31:19.241 [2024-11-20T13:51:22.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:19.241 [2024-11-20T13:51:22.158Z] =================================================================================================================== 00:31:19.241 [2024-11-20T13:51:22.158Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:19.241 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:19.241 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:19.241 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85257' 00:31:19.241 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85257 00:31:19.241 [2024-11-20 13:51:21.942839] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:19.241 13:51:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 85257 00:31:19.499 [2024-11-20 13:51:22.358246] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:20.469 13:51:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # 
return 0 00:31:20.469 00:31:20.469 real 0m20.184s 00:31:20.469 user 0m25.104s 00:31:20.469 sys 0m2.360s 00:31:20.469 13:51:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:20.469 13:51:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.469 ************************************ 00:31:20.469 END TEST raid5f_rebuild_test 00:31:20.469 ************************************ 00:31:20.728 13:51:23 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:31:20.728 13:51:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:31:20.728 13:51:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:20.728 13:51:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:20.728 ************************************ 00:31:20.728 START TEST raid5f_rebuild_test_sb 00:31:20.728 ************************************ 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:31:20.728 13:51:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 
00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85763 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85763 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85763 ']' 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:20.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:20.728 13:51:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:20.728 [2024-11-20 13:51:23.527334] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:31:20.728 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:31:20.728 Zero copy mechanism will not be used. 00:31:20.728 [2024-11-20 13:51:23.527543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85763 ] 00:31:20.987 [2024-11-20 13:51:23.713641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.987 [2024-11-20 13:51:23.843241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.245 [2024-11-20 13:51:24.030926] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:21.245 [2024-11-20 13:51:24.030975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.812 BaseBdev1_malloc 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.812 [2024-11-20 
13:51:24.531685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:21.812 [2024-11-20 13:51:24.531759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:21.812 [2024-11-20 13:51:24.531790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:21.812 [2024-11-20 13:51:24.531807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:21.812 [2024-11-20 13:51:24.534640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:21.812 [2024-11-20 13:51:24.534679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:21.812 BaseBdev1 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.812 BaseBdev2_malloc 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.812 [2024-11-20 13:51:24.585056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:21.812 [2024-11-20 13:51:24.585129] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:21.812 [2024-11-20 13:51:24.585158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:21.812 [2024-11-20 13:51:24.585174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:21.812 [2024-11-20 13:51:24.587579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:21.812 [2024-11-20 13:51:24.587619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:21.812 BaseBdev2 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.812 BaseBdev3_malloc 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.812 [2024-11-20 13:51:24.646095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:31:21.812 [2024-11-20 13:51:24.646156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:21.812 [2024-11-20 13:51:24.646186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000008a80 00:31:21.812 [2024-11-20 13:51:24.646203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:21.812 [2024-11-20 13:51:24.648712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:21.812 [2024-11-20 13:51:24.648757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:21.812 BaseBdev3 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.812 BaseBdev4_malloc 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:21.812 [2024-11-20 13:51:24.698841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:31:21.812 [2024-11-20 13:51:24.698949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:21.812 [2024-11-20 13:51:24.698981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:31:21.812 [2024-11-20 13:51:24.698998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:21.812 [2024-11-20 13:51:24.701600] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:21.812 [2024-11-20 13:51:24.701646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:31:21.812 BaseBdev4 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.812 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.071 spare_malloc 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.071 spare_delay 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.071 [2024-11-20 13:51:24.763965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:22.071 [2024-11-20 13:51:24.764036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:22.071 [2024-11-20 13:51:24.764066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:31:22.071 [2024-11-20 13:51:24.764083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:22.071 [2024-11-20 13:51:24.766858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:22.071 [2024-11-20 13:51:24.766918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:22.071 spare 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.071 [2024-11-20 13:51:24.776038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:22.071 [2024-11-20 13:51:24.778560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:22.071 [2024-11-20 13:51:24.778642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:22.071 [2024-11-20 13:51:24.778732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:22.071 [2024-11-20 13:51:24.779021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:22.071 [2024-11-20 13:51:24.779053] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:31:22.071 [2024-11-20 13:51:24.779405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:31:22.071 [2024-11-20 13:51:24.786065] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:22.071 [2024-11-20 13:51:24.786111] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:22.071 [2024-11-20 13:51:24.786379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:22.071 "name": "raid_bdev1", 00:31:22.071 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:22.071 "strip_size_kb": 64, 00:31:22.071 "state": "online", 00:31:22.071 "raid_level": "raid5f", 00:31:22.071 "superblock": true, 00:31:22.071 "num_base_bdevs": 4, 00:31:22.071 "num_base_bdevs_discovered": 4, 00:31:22.071 "num_base_bdevs_operational": 4, 00:31:22.071 "base_bdevs_list": [ 00:31:22.071 { 00:31:22.071 "name": "BaseBdev1", 00:31:22.071 "uuid": "5cabf107-0531-5bce-846f-f80f499b8ef1", 00:31:22.071 "is_configured": true, 00:31:22.071 "data_offset": 2048, 00:31:22.071 "data_size": 63488 00:31:22.071 }, 00:31:22.071 { 00:31:22.071 "name": "BaseBdev2", 00:31:22.071 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:22.071 "is_configured": true, 00:31:22.071 "data_offset": 2048, 00:31:22.071 "data_size": 63488 00:31:22.071 }, 00:31:22.071 { 00:31:22.071 "name": "BaseBdev3", 00:31:22.071 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:22.071 "is_configured": true, 00:31:22.071 "data_offset": 2048, 00:31:22.071 "data_size": 63488 00:31:22.071 }, 00:31:22.071 { 00:31:22.071 "name": "BaseBdev4", 00:31:22.071 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:22.071 "is_configured": true, 00:31:22.071 "data_offset": 2048, 00:31:22.071 "data_size": 63488 00:31:22.071 } 00:31:22.071 ] 00:31:22.071 }' 00:31:22.071 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:22.072 13:51:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.637 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:22.637 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:31:22.638 [2024-11-20 13:51:25.337767] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:22.638 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:22.896 [2024-11-20 13:51:25.693692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:31:22.896 /dev/nbd0 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:31:22.896 1+0 records in 00:31:22.896 1+0 records out 00:31:22.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362139 s, 11.3 MB/s 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:31:22.896 13:51:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:31:23.831 496+0 records in 00:31:23.831 496+0 records out 00:31:23.831 97517568 bytes (98 MB, 93 MiB) copied, 0.638948 s, 153 MB/s 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local 
nbd_list 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:31:23.831 [2024-11-20 13:51:26.696546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:23.831 [2024-11-20 13:51:26.732310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 
3 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.831 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:24.090 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.090 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:24.090 "name": "raid_bdev1", 00:31:24.090 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:24.090 "strip_size_kb": 64, 00:31:24.090 "state": "online", 00:31:24.090 "raid_level": "raid5f", 00:31:24.090 "superblock": true, 00:31:24.090 "num_base_bdevs": 4, 00:31:24.090 "num_base_bdevs_discovered": 3, 00:31:24.090 
"num_base_bdevs_operational": 3, 00:31:24.090 "base_bdevs_list": [ 00:31:24.090 { 00:31:24.090 "name": null, 00:31:24.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:24.090 "is_configured": false, 00:31:24.090 "data_offset": 0, 00:31:24.090 "data_size": 63488 00:31:24.090 }, 00:31:24.090 { 00:31:24.090 "name": "BaseBdev2", 00:31:24.090 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:24.090 "is_configured": true, 00:31:24.090 "data_offset": 2048, 00:31:24.090 "data_size": 63488 00:31:24.090 }, 00:31:24.090 { 00:31:24.090 "name": "BaseBdev3", 00:31:24.090 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:24.090 "is_configured": true, 00:31:24.090 "data_offset": 2048, 00:31:24.090 "data_size": 63488 00:31:24.090 }, 00:31:24.090 { 00:31:24.090 "name": "BaseBdev4", 00:31:24.090 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:24.090 "is_configured": true, 00:31:24.090 "data_offset": 2048, 00:31:24.090 "data_size": 63488 00:31:24.090 } 00:31:24.090 ] 00:31:24.090 }' 00:31:24.090 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:24.090 13:51:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:24.669 13:51:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:24.669 13:51:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.669 13:51:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:24.670 [2024-11-20 13:51:27.276488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:24.670 [2024-11-20 13:51:27.290511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:31:24.670 13:51:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.670 13:51:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:31:24.670 
[2024-11-20 13:51:27.299275] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:25.611 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:25.611 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:25.611 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:25.611 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:25.611 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:25.611 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.611 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:25.611 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.611 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:25.611 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.611 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:25.611 "name": "raid_bdev1", 00:31:25.611 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:25.611 "strip_size_kb": 64, 00:31:25.611 "state": "online", 00:31:25.611 "raid_level": "raid5f", 00:31:25.611 "superblock": true, 00:31:25.611 "num_base_bdevs": 4, 00:31:25.611 "num_base_bdevs_discovered": 4, 00:31:25.611 "num_base_bdevs_operational": 4, 00:31:25.611 "process": { 00:31:25.611 "type": "rebuild", 00:31:25.611 "target": "spare", 00:31:25.611 "progress": { 00:31:25.611 "blocks": 17280, 00:31:25.611 "percent": 9 00:31:25.611 } 00:31:25.611 }, 00:31:25.611 "base_bdevs_list": [ 00:31:25.611 { 00:31:25.611 "name": 
"spare", 00:31:25.611 "uuid": "fa5f57f6-85f6-5093-a935-0a7a7ea6be2c", 00:31:25.611 "is_configured": true, 00:31:25.611 "data_offset": 2048, 00:31:25.611 "data_size": 63488 00:31:25.611 }, 00:31:25.611 { 00:31:25.611 "name": "BaseBdev2", 00:31:25.611 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:25.611 "is_configured": true, 00:31:25.611 "data_offset": 2048, 00:31:25.611 "data_size": 63488 00:31:25.611 }, 00:31:25.611 { 00:31:25.611 "name": "BaseBdev3", 00:31:25.611 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:25.611 "is_configured": true, 00:31:25.611 "data_offset": 2048, 00:31:25.611 "data_size": 63488 00:31:25.611 }, 00:31:25.611 { 00:31:25.611 "name": "BaseBdev4", 00:31:25.611 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:25.611 "is_configured": true, 00:31:25.611 "data_offset": 2048, 00:31:25.611 "data_size": 63488 00:31:25.611 } 00:31:25.611 ] 00:31:25.611 }' 00:31:25.611 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:25.611 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:25.611 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:25.611 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:25.611 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:31:25.611 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.611 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:25.611 [2024-11-20 13:51:28.456999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:25.611 [2024-11-20 13:51:28.511792] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:25.611 [2024-11-20 
13:51:28.511910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:25.611 [2024-11-20 13:51:28.511939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:25.611 [2024-11-20 13:51:28.511954] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:25.870 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.870 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:25.870 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:25.870 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:25.870 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:25.870 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:25.870 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:25.870 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:25.870 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:25.870 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:25.870 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:25.870 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.870 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.870 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:25.870 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:31:25.870 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.870 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:25.870 "name": "raid_bdev1", 00:31:25.870 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:25.870 "strip_size_kb": 64, 00:31:25.870 "state": "online", 00:31:25.870 "raid_level": "raid5f", 00:31:25.870 "superblock": true, 00:31:25.870 "num_base_bdevs": 4, 00:31:25.870 "num_base_bdevs_discovered": 3, 00:31:25.870 "num_base_bdevs_operational": 3, 00:31:25.870 "base_bdevs_list": [ 00:31:25.870 { 00:31:25.870 "name": null, 00:31:25.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:25.871 "is_configured": false, 00:31:25.871 "data_offset": 0, 00:31:25.871 "data_size": 63488 00:31:25.871 }, 00:31:25.871 { 00:31:25.871 "name": "BaseBdev2", 00:31:25.871 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:25.871 "is_configured": true, 00:31:25.871 "data_offset": 2048, 00:31:25.871 "data_size": 63488 00:31:25.871 }, 00:31:25.871 { 00:31:25.871 "name": "BaseBdev3", 00:31:25.871 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:25.871 "is_configured": true, 00:31:25.871 "data_offset": 2048, 00:31:25.871 "data_size": 63488 00:31:25.871 }, 00:31:25.871 { 00:31:25.871 "name": "BaseBdev4", 00:31:25.871 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:25.871 "is_configured": true, 00:31:25.871 "data_offset": 2048, 00:31:25.871 "data_size": 63488 00:31:25.871 } 00:31:25.871 ] 00:31:25.871 }' 00:31:25.871 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:25.871 13:51:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:26.439 "name": "raid_bdev1", 00:31:26.439 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:26.439 "strip_size_kb": 64, 00:31:26.439 "state": "online", 00:31:26.439 "raid_level": "raid5f", 00:31:26.439 "superblock": true, 00:31:26.439 "num_base_bdevs": 4, 00:31:26.439 "num_base_bdevs_discovered": 3, 00:31:26.439 "num_base_bdevs_operational": 3, 00:31:26.439 "base_bdevs_list": [ 00:31:26.439 { 00:31:26.439 "name": null, 00:31:26.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:26.439 "is_configured": false, 00:31:26.439 "data_offset": 0, 00:31:26.439 "data_size": 63488 00:31:26.439 }, 00:31:26.439 { 00:31:26.439 "name": "BaseBdev2", 00:31:26.439 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:26.439 "is_configured": true, 00:31:26.439 "data_offset": 2048, 00:31:26.439 "data_size": 63488 00:31:26.439 }, 00:31:26.439 { 00:31:26.439 "name": "BaseBdev3", 00:31:26.439 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:26.439 "is_configured": true, 
00:31:26.439 "data_offset": 2048, 00:31:26.439 "data_size": 63488 00:31:26.439 }, 00:31:26.439 { 00:31:26.439 "name": "BaseBdev4", 00:31:26.439 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:26.439 "is_configured": true, 00:31:26.439 "data_offset": 2048, 00:31:26.439 "data_size": 63488 00:31:26.439 } 00:31:26.439 ] 00:31:26.439 }' 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:26.439 [2024-11-20 13:51:29.208715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:26.439 [2024-11-20 13:51:29.222265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.439 13:51:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:31:26.439 [2024-11-20 13:51:29.231032] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:27.376 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:27.376 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:27.376 13:51:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:27.376 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:27.376 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:27.376 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:27.376 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:27.376 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.376 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:27.376 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.376 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:27.376 "name": "raid_bdev1", 00:31:27.376 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:27.376 "strip_size_kb": 64, 00:31:27.376 "state": "online", 00:31:27.376 "raid_level": "raid5f", 00:31:27.376 "superblock": true, 00:31:27.376 "num_base_bdevs": 4, 00:31:27.376 "num_base_bdevs_discovered": 4, 00:31:27.376 "num_base_bdevs_operational": 4, 00:31:27.376 "process": { 00:31:27.376 "type": "rebuild", 00:31:27.376 "target": "spare", 00:31:27.376 "progress": { 00:31:27.376 "blocks": 17280, 00:31:27.376 "percent": 9 00:31:27.376 } 00:31:27.376 }, 00:31:27.376 "base_bdevs_list": [ 00:31:27.376 { 00:31:27.376 "name": "spare", 00:31:27.376 "uuid": "fa5f57f6-85f6-5093-a935-0a7a7ea6be2c", 00:31:27.376 "is_configured": true, 00:31:27.376 "data_offset": 2048, 00:31:27.376 "data_size": 63488 00:31:27.376 }, 00:31:27.376 { 00:31:27.376 "name": "BaseBdev2", 00:31:27.376 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:27.376 "is_configured": true, 00:31:27.376 "data_offset": 2048, 00:31:27.376 "data_size": 63488 
00:31:27.376 }, 00:31:27.376 { 00:31:27.376 "name": "BaseBdev3", 00:31:27.376 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:27.376 "is_configured": true, 00:31:27.376 "data_offset": 2048, 00:31:27.376 "data_size": 63488 00:31:27.376 }, 00:31:27.376 { 00:31:27.376 "name": "BaseBdev4", 00:31:27.376 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:27.376 "is_configured": true, 00:31:27.376 "data_offset": 2048, 00:31:27.376 "data_size": 63488 00:31:27.376 } 00:31:27.376 ] 00:31:27.376 }' 00:31:27.376 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:31:27.635 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=701 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:27.635 13:51:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:27.635 "name": "raid_bdev1", 00:31:27.635 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:27.635 "strip_size_kb": 64, 00:31:27.635 "state": "online", 00:31:27.635 "raid_level": "raid5f", 00:31:27.635 "superblock": true, 00:31:27.635 "num_base_bdevs": 4, 00:31:27.635 "num_base_bdevs_discovered": 4, 00:31:27.635 "num_base_bdevs_operational": 4, 00:31:27.635 "process": { 00:31:27.635 "type": "rebuild", 00:31:27.635 "target": "spare", 00:31:27.635 "progress": { 00:31:27.635 "blocks": 21120, 00:31:27.635 "percent": 11 00:31:27.635 } 00:31:27.635 }, 00:31:27.635 "base_bdevs_list": [ 00:31:27.635 { 00:31:27.635 "name": "spare", 00:31:27.635 "uuid": "fa5f57f6-85f6-5093-a935-0a7a7ea6be2c", 00:31:27.635 "is_configured": true, 00:31:27.635 "data_offset": 2048, 00:31:27.635 "data_size": 63488 00:31:27.635 }, 00:31:27.635 { 00:31:27.635 "name": "BaseBdev2", 00:31:27.635 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:27.635 "is_configured": true, 00:31:27.635 "data_offset": 2048, 00:31:27.635 "data_size": 63488 
00:31:27.635 }, 00:31:27.635 { 00:31:27.635 "name": "BaseBdev3", 00:31:27.635 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:27.635 "is_configured": true, 00:31:27.635 "data_offset": 2048, 00:31:27.635 "data_size": 63488 00:31:27.635 }, 00:31:27.635 { 00:31:27.635 "name": "BaseBdev4", 00:31:27.635 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:27.635 "is_configured": true, 00:31:27.635 "data_offset": 2048, 00:31:27.635 "data_size": 63488 00:31:27.635 } 00:31:27.635 ] 00:31:27.635 }' 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:27.635 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:27.894 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:27.894 13:51:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:28.831 13:51:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:28.831 13:51:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:28.831 13:51:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:28.831 13:51:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:28.831 13:51:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:28.831 13:51:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:28.831 13:51:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:28.831 13:51:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.831 13:51:31 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:28.831 13:51:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:28.831 13:51:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.831 13:51:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:28.831 "name": "raid_bdev1", 00:31:28.831 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:28.831 "strip_size_kb": 64, 00:31:28.831 "state": "online", 00:31:28.831 "raid_level": "raid5f", 00:31:28.831 "superblock": true, 00:31:28.831 "num_base_bdevs": 4, 00:31:28.831 "num_base_bdevs_discovered": 4, 00:31:28.831 "num_base_bdevs_operational": 4, 00:31:28.831 "process": { 00:31:28.831 "type": "rebuild", 00:31:28.831 "target": "spare", 00:31:28.831 "progress": { 00:31:28.831 "blocks": 44160, 00:31:28.831 "percent": 23 00:31:28.831 } 00:31:28.831 }, 00:31:28.831 "base_bdevs_list": [ 00:31:28.831 { 00:31:28.831 "name": "spare", 00:31:28.831 "uuid": "fa5f57f6-85f6-5093-a935-0a7a7ea6be2c", 00:31:28.831 "is_configured": true, 00:31:28.831 "data_offset": 2048, 00:31:28.831 "data_size": 63488 00:31:28.831 }, 00:31:28.831 { 00:31:28.831 "name": "BaseBdev2", 00:31:28.831 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:28.831 "is_configured": true, 00:31:28.831 "data_offset": 2048, 00:31:28.831 "data_size": 63488 00:31:28.831 }, 00:31:28.831 { 00:31:28.831 "name": "BaseBdev3", 00:31:28.831 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:28.831 "is_configured": true, 00:31:28.831 "data_offset": 2048, 00:31:28.831 "data_size": 63488 00:31:28.831 }, 00:31:28.831 { 00:31:28.831 "name": "BaseBdev4", 00:31:28.831 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:28.831 "is_configured": true, 00:31:28.831 "data_offset": 2048, 00:31:28.831 "data_size": 63488 00:31:28.831 } 00:31:28.831 ] 00:31:28.831 }' 00:31:28.831 13:51:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:28.831 13:51:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:28.831 13:51:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:29.090 13:51:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:29.090 13:51:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:30.026 13:51:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:30.026 13:51:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:30.026 13:51:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:30.026 13:51:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:30.026 13:51:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:30.026 13:51:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:30.026 13:51:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:30.026 13:51:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:30.026 13:51:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.026 13:51:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:30.026 13:51:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.026 13:51:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:30.026 "name": "raid_bdev1", 00:31:30.026 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:30.026 
"strip_size_kb": 64, 00:31:30.026 "state": "online", 00:31:30.026 "raid_level": "raid5f", 00:31:30.026 "superblock": true, 00:31:30.026 "num_base_bdevs": 4, 00:31:30.026 "num_base_bdevs_discovered": 4, 00:31:30.026 "num_base_bdevs_operational": 4, 00:31:30.026 "process": { 00:31:30.026 "type": "rebuild", 00:31:30.026 "target": "spare", 00:31:30.026 "progress": { 00:31:30.026 "blocks": 67200, 00:31:30.026 "percent": 35 00:31:30.026 } 00:31:30.026 }, 00:31:30.026 "base_bdevs_list": [ 00:31:30.026 { 00:31:30.026 "name": "spare", 00:31:30.026 "uuid": "fa5f57f6-85f6-5093-a935-0a7a7ea6be2c", 00:31:30.026 "is_configured": true, 00:31:30.026 "data_offset": 2048, 00:31:30.026 "data_size": 63488 00:31:30.026 }, 00:31:30.026 { 00:31:30.026 "name": "BaseBdev2", 00:31:30.026 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:30.026 "is_configured": true, 00:31:30.026 "data_offset": 2048, 00:31:30.026 "data_size": 63488 00:31:30.026 }, 00:31:30.026 { 00:31:30.026 "name": "BaseBdev3", 00:31:30.026 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:30.026 "is_configured": true, 00:31:30.026 "data_offset": 2048, 00:31:30.026 "data_size": 63488 00:31:30.026 }, 00:31:30.026 { 00:31:30.026 "name": "BaseBdev4", 00:31:30.026 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:30.026 "is_configured": true, 00:31:30.026 "data_offset": 2048, 00:31:30.026 "data_size": 63488 00:31:30.026 } 00:31:30.026 ] 00:31:30.026 }' 00:31:30.026 13:51:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:30.026 13:51:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:30.026 13:51:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:30.026 13:51:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:30.026 13:51:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:31.399 
13:51:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:31.399 13:51:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:31.399 13:51:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:31.399 13:51:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:31.399 13:51:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:31.399 13:51:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:31.399 13:51:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:31.399 13:51:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:31.399 13:51:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.399 13:51:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:31.399 13:51:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.399 13:51:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:31.399 "name": "raid_bdev1", 00:31:31.399 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:31.399 "strip_size_kb": 64, 00:31:31.399 "state": "online", 00:31:31.399 "raid_level": "raid5f", 00:31:31.399 "superblock": true, 00:31:31.399 "num_base_bdevs": 4, 00:31:31.399 "num_base_bdevs_discovered": 4, 00:31:31.399 "num_base_bdevs_operational": 4, 00:31:31.399 "process": { 00:31:31.399 "type": "rebuild", 00:31:31.399 "target": "spare", 00:31:31.399 "progress": { 00:31:31.399 "blocks": 88320, 00:31:31.399 "percent": 46 00:31:31.399 } 00:31:31.399 }, 00:31:31.399 "base_bdevs_list": [ 00:31:31.399 { 00:31:31.399 "name": "spare", 00:31:31.399 "uuid": 
"fa5f57f6-85f6-5093-a935-0a7a7ea6be2c", 00:31:31.399 "is_configured": true, 00:31:31.399 "data_offset": 2048, 00:31:31.399 "data_size": 63488 00:31:31.399 }, 00:31:31.399 { 00:31:31.399 "name": "BaseBdev2", 00:31:31.399 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:31.399 "is_configured": true, 00:31:31.399 "data_offset": 2048, 00:31:31.399 "data_size": 63488 00:31:31.399 }, 00:31:31.399 { 00:31:31.399 "name": "BaseBdev3", 00:31:31.400 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:31.400 "is_configured": true, 00:31:31.400 "data_offset": 2048, 00:31:31.400 "data_size": 63488 00:31:31.400 }, 00:31:31.400 { 00:31:31.400 "name": "BaseBdev4", 00:31:31.400 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:31.400 "is_configured": true, 00:31:31.400 "data_offset": 2048, 00:31:31.400 "data_size": 63488 00:31:31.400 } 00:31:31.400 ] 00:31:31.400 }' 00:31:31.400 13:51:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:31.400 13:51:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:31.400 13:51:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:31.400 13:51:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:31.400 13:51:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:32.332 13:51:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:32.332 13:51:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:32.332 13:51:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:32.332 13:51:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:32.332 13:51:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:31:32.332 13:51:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:32.332 13:51:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:32.332 13:51:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.332 13:51:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.332 13:51:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:32.332 13:51:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.332 13:51:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:32.332 "name": "raid_bdev1", 00:31:32.332 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:32.332 "strip_size_kb": 64, 00:31:32.332 "state": "online", 00:31:32.332 "raid_level": "raid5f", 00:31:32.332 "superblock": true, 00:31:32.332 "num_base_bdevs": 4, 00:31:32.332 "num_base_bdevs_discovered": 4, 00:31:32.332 "num_base_bdevs_operational": 4, 00:31:32.332 "process": { 00:31:32.332 "type": "rebuild", 00:31:32.332 "target": "spare", 00:31:32.332 "progress": { 00:31:32.332 "blocks": 111360, 00:31:32.332 "percent": 58 00:31:32.332 } 00:31:32.332 }, 00:31:32.332 "base_bdevs_list": [ 00:31:32.332 { 00:31:32.332 "name": "spare", 00:31:32.332 "uuid": "fa5f57f6-85f6-5093-a935-0a7a7ea6be2c", 00:31:32.332 "is_configured": true, 00:31:32.332 "data_offset": 2048, 00:31:32.332 "data_size": 63488 00:31:32.332 }, 00:31:32.332 { 00:31:32.332 "name": "BaseBdev2", 00:31:32.332 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:32.332 "is_configured": true, 00:31:32.332 "data_offset": 2048, 00:31:32.332 "data_size": 63488 00:31:32.332 }, 00:31:32.332 { 00:31:32.332 "name": "BaseBdev3", 00:31:32.332 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:32.332 "is_configured": true, 00:31:32.332 
"data_offset": 2048, 00:31:32.332 "data_size": 63488 00:31:32.332 }, 00:31:32.332 { 00:31:32.332 "name": "BaseBdev4", 00:31:32.332 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:32.332 "is_configured": true, 00:31:32.332 "data_offset": 2048, 00:31:32.332 "data_size": 63488 00:31:32.332 } 00:31:32.332 ] 00:31:32.332 }' 00:31:32.332 13:51:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:32.332 13:51:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:32.332 13:51:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:32.590 13:51:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:32.591 13:51:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:33.524 13:51:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:33.524 13:51:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:33.524 13:51:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:33.524 13:51:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:33.524 13:51:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:33.524 13:51:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:33.524 13:51:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:33.524 13:51:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.524 13:51:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:33.524 13:51:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:31:33.524 13:51:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.524 13:51:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:33.524 "name": "raid_bdev1", 00:31:33.524 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:33.524 "strip_size_kb": 64, 00:31:33.524 "state": "online", 00:31:33.524 "raid_level": "raid5f", 00:31:33.524 "superblock": true, 00:31:33.524 "num_base_bdevs": 4, 00:31:33.524 "num_base_bdevs_discovered": 4, 00:31:33.524 "num_base_bdevs_operational": 4, 00:31:33.524 "process": { 00:31:33.524 "type": "rebuild", 00:31:33.524 "target": "spare", 00:31:33.524 "progress": { 00:31:33.524 "blocks": 132480, 00:31:33.524 "percent": 69 00:31:33.524 } 00:31:33.524 }, 00:31:33.524 "base_bdevs_list": [ 00:31:33.524 { 00:31:33.524 "name": "spare", 00:31:33.524 "uuid": "fa5f57f6-85f6-5093-a935-0a7a7ea6be2c", 00:31:33.524 "is_configured": true, 00:31:33.524 "data_offset": 2048, 00:31:33.524 "data_size": 63488 00:31:33.524 }, 00:31:33.524 { 00:31:33.524 "name": "BaseBdev2", 00:31:33.524 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:33.524 "is_configured": true, 00:31:33.524 "data_offset": 2048, 00:31:33.524 "data_size": 63488 00:31:33.524 }, 00:31:33.524 { 00:31:33.524 "name": "BaseBdev3", 00:31:33.524 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:33.524 "is_configured": true, 00:31:33.524 "data_offset": 2048, 00:31:33.524 "data_size": 63488 00:31:33.524 }, 00:31:33.524 { 00:31:33.524 "name": "BaseBdev4", 00:31:33.524 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:33.524 "is_configured": true, 00:31:33.524 "data_offset": 2048, 00:31:33.524 "data_size": 63488 00:31:33.524 } 00:31:33.524 ] 00:31:33.524 }' 00:31:33.524 13:51:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:33.524 13:51:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:31:33.524 13:51:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:33.783 13:51:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:33.783 13:51:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:34.718 13:51:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:34.718 13:51:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:34.718 13:51:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:34.718 13:51:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:34.718 13:51:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:34.718 13:51:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:34.718 13:51:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:34.718 13:51:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:34.718 13:51:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.718 13:51:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.718 13:51:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.718 13:51:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:34.718 "name": "raid_bdev1", 00:31:34.718 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:34.718 "strip_size_kb": 64, 00:31:34.718 "state": "online", 00:31:34.718 "raid_level": "raid5f", 00:31:34.718 "superblock": true, 00:31:34.718 "num_base_bdevs": 4, 00:31:34.718 "num_base_bdevs_discovered": 4, 
00:31:34.718 "num_base_bdevs_operational": 4, 00:31:34.718 "process": { 00:31:34.718 "type": "rebuild", 00:31:34.718 "target": "spare", 00:31:34.718 "progress": { 00:31:34.718 "blocks": 155520, 00:31:34.718 "percent": 81 00:31:34.718 } 00:31:34.718 }, 00:31:34.718 "base_bdevs_list": [ 00:31:34.718 { 00:31:34.718 "name": "spare", 00:31:34.719 "uuid": "fa5f57f6-85f6-5093-a935-0a7a7ea6be2c", 00:31:34.719 "is_configured": true, 00:31:34.719 "data_offset": 2048, 00:31:34.719 "data_size": 63488 00:31:34.719 }, 00:31:34.719 { 00:31:34.719 "name": "BaseBdev2", 00:31:34.719 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:34.719 "is_configured": true, 00:31:34.719 "data_offset": 2048, 00:31:34.719 "data_size": 63488 00:31:34.719 }, 00:31:34.719 { 00:31:34.719 "name": "BaseBdev3", 00:31:34.719 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:34.719 "is_configured": true, 00:31:34.719 "data_offset": 2048, 00:31:34.719 "data_size": 63488 00:31:34.719 }, 00:31:34.719 { 00:31:34.719 "name": "BaseBdev4", 00:31:34.719 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:34.719 "is_configured": true, 00:31:34.719 "data_offset": 2048, 00:31:34.719 "data_size": 63488 00:31:34.719 } 00:31:34.719 ] 00:31:34.719 }' 00:31:34.719 13:51:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:34.719 13:51:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:34.719 13:51:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:34.719 13:51:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:34.719 13:51:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:36.094 13:51:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:36.094 13:51:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:31:36.094 13:51:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:36.094 13:51:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:36.094 13:51:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:36.094 13:51:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:36.094 13:51:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:36.094 13:51:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:36.094 13:51:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.094 13:51:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.094 13:51:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.094 13:51:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:36.094 "name": "raid_bdev1", 00:31:36.094 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:36.094 "strip_size_kb": 64, 00:31:36.094 "state": "online", 00:31:36.094 "raid_level": "raid5f", 00:31:36.094 "superblock": true, 00:31:36.094 "num_base_bdevs": 4, 00:31:36.094 "num_base_bdevs_discovered": 4, 00:31:36.094 "num_base_bdevs_operational": 4, 00:31:36.094 "process": { 00:31:36.094 "type": "rebuild", 00:31:36.094 "target": "spare", 00:31:36.094 "progress": { 00:31:36.094 "blocks": 176640, 00:31:36.094 "percent": 92 00:31:36.094 } 00:31:36.094 }, 00:31:36.094 "base_bdevs_list": [ 00:31:36.094 { 00:31:36.094 "name": "spare", 00:31:36.094 "uuid": "fa5f57f6-85f6-5093-a935-0a7a7ea6be2c", 00:31:36.094 "is_configured": true, 00:31:36.094 "data_offset": 2048, 00:31:36.094 "data_size": 63488 00:31:36.094 }, 00:31:36.094 { 00:31:36.094 "name": "BaseBdev2", 
00:31:36.094 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:36.094 "is_configured": true, 00:31:36.094 "data_offset": 2048, 00:31:36.094 "data_size": 63488 00:31:36.094 }, 00:31:36.094 { 00:31:36.094 "name": "BaseBdev3", 00:31:36.094 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:36.094 "is_configured": true, 00:31:36.094 "data_offset": 2048, 00:31:36.094 "data_size": 63488 00:31:36.094 }, 00:31:36.094 { 00:31:36.094 "name": "BaseBdev4", 00:31:36.094 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:36.094 "is_configured": true, 00:31:36.094 "data_offset": 2048, 00:31:36.094 "data_size": 63488 00:31:36.094 } 00:31:36.094 ] 00:31:36.094 }' 00:31:36.094 13:51:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:36.094 13:51:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:36.094 13:51:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:36.094 13:51:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:36.094 13:51:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:36.659 [2024-11-20 13:51:39.333161] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:36.659 [2024-11-20 13:51:39.333262] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:36.659 [2024-11-20 13:51:39.333510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:36.917 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:36.917 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:36.917 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:36.917 13:51:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:36.917 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:36.917 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:36.917 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:36.917 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.917 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.917 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:36.917 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.917 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:36.917 "name": "raid_bdev1", 00:31:36.917 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:36.917 "strip_size_kb": 64, 00:31:36.917 "state": "online", 00:31:36.917 "raid_level": "raid5f", 00:31:36.917 "superblock": true, 00:31:36.917 "num_base_bdevs": 4, 00:31:36.918 "num_base_bdevs_discovered": 4, 00:31:36.918 "num_base_bdevs_operational": 4, 00:31:36.918 "base_bdevs_list": [ 00:31:36.918 { 00:31:36.918 "name": "spare", 00:31:36.918 "uuid": "fa5f57f6-85f6-5093-a935-0a7a7ea6be2c", 00:31:36.918 "is_configured": true, 00:31:36.918 "data_offset": 2048, 00:31:36.918 "data_size": 63488 00:31:36.918 }, 00:31:36.918 { 00:31:36.918 "name": "BaseBdev2", 00:31:36.918 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:36.918 "is_configured": true, 00:31:36.918 "data_offset": 2048, 00:31:36.918 "data_size": 63488 00:31:36.918 }, 00:31:36.918 { 00:31:36.918 "name": "BaseBdev3", 00:31:36.918 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:36.918 "is_configured": true, 00:31:36.918 "data_offset": 2048, 00:31:36.918 
"data_size": 63488 00:31:36.918 }, 00:31:36.918 { 00:31:36.918 "name": "BaseBdev4", 00:31:36.918 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:36.918 "is_configured": true, 00:31:36.918 "data_offset": 2048, 00:31:36.918 "data_size": 63488 00:31:36.918 } 00:31:36.918 ] 00:31:36.918 }' 00:31:36.918 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:37.176 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:37.176 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:37.176 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:31:37.176 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:31:37.176 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:37.176 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:37.176 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:37.176 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:37.176 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:37.176 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:37.176 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:37.176 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.176 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.176 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.176 13:51:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:37.176 "name": "raid_bdev1", 00:31:37.176 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:37.176 "strip_size_kb": 64, 00:31:37.176 "state": "online", 00:31:37.176 "raid_level": "raid5f", 00:31:37.176 "superblock": true, 00:31:37.176 "num_base_bdevs": 4, 00:31:37.176 "num_base_bdevs_discovered": 4, 00:31:37.176 "num_base_bdevs_operational": 4, 00:31:37.176 "base_bdevs_list": [ 00:31:37.176 { 00:31:37.176 "name": "spare", 00:31:37.176 "uuid": "fa5f57f6-85f6-5093-a935-0a7a7ea6be2c", 00:31:37.176 "is_configured": true, 00:31:37.176 "data_offset": 2048, 00:31:37.176 "data_size": 63488 00:31:37.176 }, 00:31:37.176 { 00:31:37.176 "name": "BaseBdev2", 00:31:37.176 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:37.176 "is_configured": true, 00:31:37.176 "data_offset": 2048, 00:31:37.177 "data_size": 63488 00:31:37.177 }, 00:31:37.177 { 00:31:37.177 "name": "BaseBdev3", 00:31:37.177 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:37.177 "is_configured": true, 00:31:37.177 "data_offset": 2048, 00:31:37.177 "data_size": 63488 00:31:37.177 }, 00:31:37.177 { 00:31:37.177 "name": "BaseBdev4", 00:31:37.177 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:37.177 "is_configured": true, 00:31:37.177 "data_offset": 2048, 00:31:37.177 "data_size": 63488 00:31:37.177 } 00:31:37.177 ] 00:31:37.177 }' 00:31:37.177 13:51:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:37.177 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:37.177 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:37.177 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:37.177 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:31:37.177 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:37.177 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:37.177 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:37.177 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:37.177 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:37.177 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:37.177 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:37.177 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:37.177 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:37.177 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:37.177 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:37.177 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.177 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.435 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.435 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:37.435 "name": "raid_bdev1", 00:31:37.435 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:37.435 "strip_size_kb": 64, 00:31:37.435 "state": "online", 00:31:37.435 "raid_level": "raid5f", 00:31:37.435 "superblock": true, 00:31:37.435 "num_base_bdevs": 4, 00:31:37.435 "num_base_bdevs_discovered": 4, 00:31:37.435 
"num_base_bdevs_operational": 4, 00:31:37.435 "base_bdevs_list": [ 00:31:37.435 { 00:31:37.435 "name": "spare", 00:31:37.435 "uuid": "fa5f57f6-85f6-5093-a935-0a7a7ea6be2c", 00:31:37.435 "is_configured": true, 00:31:37.435 "data_offset": 2048, 00:31:37.435 "data_size": 63488 00:31:37.435 }, 00:31:37.435 { 00:31:37.435 "name": "BaseBdev2", 00:31:37.435 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:37.435 "is_configured": true, 00:31:37.435 "data_offset": 2048, 00:31:37.435 "data_size": 63488 00:31:37.435 }, 00:31:37.435 { 00:31:37.435 "name": "BaseBdev3", 00:31:37.435 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:37.435 "is_configured": true, 00:31:37.435 "data_offset": 2048, 00:31:37.435 "data_size": 63488 00:31:37.435 }, 00:31:37.435 { 00:31:37.435 "name": "BaseBdev4", 00:31:37.435 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:37.435 "is_configured": true, 00:31:37.435 "data_offset": 2048, 00:31:37.435 "data_size": 63488 00:31:37.435 } 00:31:37.435 ] 00:31:37.435 }' 00:31:37.435 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:37.435 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.693 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:37.693 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.693 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.693 [2024-11-20 13:51:40.587027] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:37.693 [2024-11-20 13:51:40.587067] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:37.693 [2024-11-20 13:51:40.587205] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:37.693 [2024-11-20 13:51:40.587384] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:31:37.693 [2024-11-20 13:51:40.587413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:37.693 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.693 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:37.693 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.693 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:31:37.693 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.693 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.952 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:31:37.952 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:37.952 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:31:37.952 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:37.952 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:37.952 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:37.952 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:37.952 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:37.952 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:37.952 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:31:37.952 13:51:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:37.952 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:37.952 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:38.210 /dev/nbd0 00:31:38.210 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:38.210 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:38.210 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:38.210 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:31:38.210 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:38.210 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:38.210 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:38.210 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:31:38.210 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:38.210 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:38.210 13:51:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:38.210 1+0 records in 00:31:38.210 1+0 records out 00:31:38.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253887 s, 16.1 MB/s 00:31:38.210 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:38.210 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # size=4096 00:31:38.210 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:38.210 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:38.210 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:31:38.210 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:38.210 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:38.210 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:31:38.468 /dev/nbd1 00:31:38.468 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:38.468 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:38.468 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:31:38.468 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:31:38.468 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:38.468 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:38.468 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:31:38.468 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:31:38.468 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:38.468 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:38.468 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:38.468 1+0 records in 00:31:38.468 1+0 records out 00:31:38.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399988 s, 10.2 MB/s 00:31:38.468 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:38.468 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:31:38.468 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:38.468 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:38.468 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:31:38.468 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:38.468 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:38.468 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:38.727 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:31:38.727 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:38.727 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:38.727 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:38.727 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:31:38.727 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:38.727 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:31:38.985 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:38.985 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:38.985 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:38.985 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:38.985 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:38.985 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:38.985 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:31:38.985 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:31:38.985 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:38.985 13:51:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.552 [2024-11-20 13:51:42.194502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:39.552 [2024-11-20 13:51:42.194586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:39.552 [2024-11-20 13:51:42.194620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:31:39.552 [2024-11-20 13:51:42.194634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:39.552 [2024-11-20 13:51:42.197811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:39.552 [2024-11-20 13:51:42.198015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:39.552 [2024-11-20 13:51:42.198275] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:39.552 [2024-11-20 13:51:42.198473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:39.552 [2024-11-20 13:51:42.198729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:39.552 [2024-11-20 13:51:42.198958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:31:39.552 [2024-11-20 13:51:42.199091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:39.552 spare 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.552 [2024-11-20 13:51:42.299230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:31:39.552 [2024-11-20 13:51:42.299306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:31:39.552 [2024-11-20 13:51:42.299779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:31:39.552 [2024-11-20 13:51:42.306132] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:31:39.552 [2024-11-20 13:51:42.306161] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:31:39.552 [2024-11-20 13:51:42.306428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:39.552 13:51:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.552 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:39.552 "name": "raid_bdev1", 00:31:39.552 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:39.552 "strip_size_kb": 64, 00:31:39.552 "state": "online", 00:31:39.552 "raid_level": "raid5f", 00:31:39.552 "superblock": true, 00:31:39.552 "num_base_bdevs": 4, 00:31:39.552 "num_base_bdevs_discovered": 4, 00:31:39.552 "num_base_bdevs_operational": 4, 00:31:39.552 "base_bdevs_list": [ 00:31:39.552 { 00:31:39.552 "name": "spare", 00:31:39.552 "uuid": "fa5f57f6-85f6-5093-a935-0a7a7ea6be2c", 00:31:39.552 "is_configured": true, 00:31:39.552 "data_offset": 2048, 00:31:39.552 "data_size": 63488 00:31:39.552 }, 00:31:39.552 { 00:31:39.552 "name": "BaseBdev2", 00:31:39.552 "uuid": 
"a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:39.552 "is_configured": true, 00:31:39.552 "data_offset": 2048, 00:31:39.552 "data_size": 63488 00:31:39.552 }, 00:31:39.552 { 00:31:39.552 "name": "BaseBdev3", 00:31:39.552 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:39.552 "is_configured": true, 00:31:39.552 "data_offset": 2048, 00:31:39.553 "data_size": 63488 00:31:39.553 }, 00:31:39.553 { 00:31:39.553 "name": "BaseBdev4", 00:31:39.553 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:39.553 "is_configured": true, 00:31:39.553 "data_offset": 2048, 00:31:39.553 "data_size": 63488 00:31:39.553 } 00:31:39.553 ] 00:31:39.553 }' 00:31:39.553 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:39.553 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.120 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:40.120 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:40.120 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:40.120 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:40.120 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:40.120 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:40.120 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.120 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.120 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:40.120 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.120 13:51:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:40.120 "name": "raid_bdev1", 00:31:40.120 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:40.120 "strip_size_kb": 64, 00:31:40.120 "state": "online", 00:31:40.120 "raid_level": "raid5f", 00:31:40.120 "superblock": true, 00:31:40.120 "num_base_bdevs": 4, 00:31:40.120 "num_base_bdevs_discovered": 4, 00:31:40.120 "num_base_bdevs_operational": 4, 00:31:40.120 "base_bdevs_list": [ 00:31:40.120 { 00:31:40.120 "name": "spare", 00:31:40.120 "uuid": "fa5f57f6-85f6-5093-a935-0a7a7ea6be2c", 00:31:40.120 "is_configured": true, 00:31:40.120 "data_offset": 2048, 00:31:40.120 "data_size": 63488 00:31:40.120 }, 00:31:40.120 { 00:31:40.120 "name": "BaseBdev2", 00:31:40.120 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:40.120 "is_configured": true, 00:31:40.120 "data_offset": 2048, 00:31:40.120 "data_size": 63488 00:31:40.120 }, 00:31:40.120 { 00:31:40.120 "name": "BaseBdev3", 00:31:40.120 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:40.120 "is_configured": true, 00:31:40.120 "data_offset": 2048, 00:31:40.120 "data_size": 63488 00:31:40.120 }, 00:31:40.120 { 00:31:40.120 "name": "BaseBdev4", 00:31:40.120 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:40.120 "is_configured": true, 00:31:40.120 "data_offset": 2048, 00:31:40.120 "data_size": 63488 00:31:40.120 } 00:31:40.120 ] 00:31:40.120 }' 00:31:40.120 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:40.120 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:40.120 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:40.120 13:51:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:40.120 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:40.120 
13:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.120 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.120 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:31:40.120 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.380 [2024-11-20 13:51:43.058020] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:40.380 "name": "raid_bdev1", 00:31:40.380 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:40.380 "strip_size_kb": 64, 00:31:40.380 "state": "online", 00:31:40.380 "raid_level": "raid5f", 00:31:40.380 "superblock": true, 00:31:40.380 "num_base_bdevs": 4, 00:31:40.380 "num_base_bdevs_discovered": 3, 00:31:40.380 "num_base_bdevs_operational": 3, 00:31:40.380 "base_bdevs_list": [ 00:31:40.380 { 00:31:40.380 "name": null, 00:31:40.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:40.380 "is_configured": false, 00:31:40.380 "data_offset": 0, 00:31:40.380 "data_size": 63488 00:31:40.380 }, 00:31:40.380 { 00:31:40.380 "name": "BaseBdev2", 00:31:40.380 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:40.380 "is_configured": true, 00:31:40.380 "data_offset": 2048, 00:31:40.380 "data_size": 63488 00:31:40.380 }, 00:31:40.380 { 00:31:40.380 "name": "BaseBdev3", 00:31:40.380 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:40.380 "is_configured": true, 00:31:40.380 "data_offset": 2048, 00:31:40.380 "data_size": 63488 00:31:40.380 }, 00:31:40.380 { 00:31:40.380 "name": "BaseBdev4", 
00:31:40.380 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:40.380 "is_configured": true, 00:31:40.380 "data_offset": 2048, 00:31:40.380 "data_size": 63488 00:31:40.380 } 00:31:40.380 ] 00:31:40.380 }' 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:40.380 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.947 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:40.947 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.947 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.947 [2024-11-20 13:51:43.590186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:40.947 [2024-11-20 13:51:43.590420] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:40.947 [2024-11-20 13:51:43.590458] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:31:40.947 [2024-11-20 13:51:43.590510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:40.947 [2024-11-20 13:51:43.603944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:31:40.947 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.947 13:51:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:31:40.947 [2024-11-20 13:51:43.612907] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:41.882 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:41.882 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:41.882 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:41.882 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:41.882 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:41.882 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:41.882 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:41.882 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.882 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:41.882 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.882 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:41.882 "name": "raid_bdev1", 00:31:41.882 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:41.882 "strip_size_kb": 64, 00:31:41.882 "state": "online", 00:31:41.882 
"raid_level": "raid5f", 00:31:41.882 "superblock": true, 00:31:41.882 "num_base_bdevs": 4, 00:31:41.882 "num_base_bdevs_discovered": 4, 00:31:41.882 "num_base_bdevs_operational": 4, 00:31:41.882 "process": { 00:31:41.882 "type": "rebuild", 00:31:41.882 "target": "spare", 00:31:41.882 "progress": { 00:31:41.882 "blocks": 17280, 00:31:41.882 "percent": 9 00:31:41.882 } 00:31:41.882 }, 00:31:41.882 "base_bdevs_list": [ 00:31:41.882 { 00:31:41.882 "name": "spare", 00:31:41.882 "uuid": "fa5f57f6-85f6-5093-a935-0a7a7ea6be2c", 00:31:41.882 "is_configured": true, 00:31:41.882 "data_offset": 2048, 00:31:41.882 "data_size": 63488 00:31:41.882 }, 00:31:41.882 { 00:31:41.882 "name": "BaseBdev2", 00:31:41.882 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:41.882 "is_configured": true, 00:31:41.882 "data_offset": 2048, 00:31:41.882 "data_size": 63488 00:31:41.882 }, 00:31:41.882 { 00:31:41.882 "name": "BaseBdev3", 00:31:41.882 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:41.882 "is_configured": true, 00:31:41.882 "data_offset": 2048, 00:31:41.882 "data_size": 63488 00:31:41.882 }, 00:31:41.882 { 00:31:41.882 "name": "BaseBdev4", 00:31:41.882 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:41.882 "is_configured": true, 00:31:41.882 "data_offset": 2048, 00:31:41.882 "data_size": 63488 00:31:41.882 } 00:31:41.882 ] 00:31:41.882 }' 00:31:41.882 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:41.882 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:41.882 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:41.882 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:41.882 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:31:41.882 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.882 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:41.882 [2024-11-20 13:51:44.778347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:42.142 [2024-11-20 13:51:44.826273] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:42.142 [2024-11-20 13:51:44.826379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:42.142 [2024-11-20 13:51:44.826406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:42.142 [2024-11-20 13:51:44.826424] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:42.142 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.142 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:42.142 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:42.142 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:42.142 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:42.142 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:42.142 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:42.142 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:42.142 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:42.142 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:42.142 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:31:42.142 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:42.142 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:42.142 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.142 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:42.142 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.142 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:42.142 "name": "raid_bdev1", 00:31:42.142 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:42.142 "strip_size_kb": 64, 00:31:42.142 "state": "online", 00:31:42.142 "raid_level": "raid5f", 00:31:42.142 "superblock": true, 00:31:42.142 "num_base_bdevs": 4, 00:31:42.142 "num_base_bdevs_discovered": 3, 00:31:42.142 "num_base_bdevs_operational": 3, 00:31:42.142 "base_bdevs_list": [ 00:31:42.142 { 00:31:42.142 "name": null, 00:31:42.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.142 "is_configured": false, 00:31:42.142 "data_offset": 0, 00:31:42.142 "data_size": 63488 00:31:42.142 }, 00:31:42.142 { 00:31:42.142 "name": "BaseBdev2", 00:31:42.142 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:42.142 "is_configured": true, 00:31:42.142 "data_offset": 2048, 00:31:42.142 "data_size": 63488 00:31:42.142 }, 00:31:42.142 { 00:31:42.142 "name": "BaseBdev3", 00:31:42.142 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:42.142 "is_configured": true, 00:31:42.142 "data_offset": 2048, 00:31:42.142 "data_size": 63488 00:31:42.142 }, 00:31:42.142 { 00:31:42.142 "name": "BaseBdev4", 00:31:42.142 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:42.142 "is_configured": true, 00:31:42.142 "data_offset": 2048, 00:31:42.142 "data_size": 63488 00:31:42.142 } 00:31:42.142 ] 00:31:42.142 }' 
00:31:42.142 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:42.142 13:51:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:42.710 13:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:31:42.710 13:51:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.710 13:51:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:42.710 [2024-11-20 13:51:45.405659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:42.710 [2024-11-20 13:51:45.405745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:42.710 [2024-11-20 13:51:45.405782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:31:42.710 [2024-11-20 13:51:45.405801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:42.710 [2024-11-20 13:51:45.406440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:42.710 [2024-11-20 13:51:45.406490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:42.710 [2024-11-20 13:51:45.406608] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:42.710 [2024-11-20 13:51:45.406633] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:42.710 [2024-11-20 13:51:45.406648] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:31:42.710 [2024-11-20 13:51:45.406691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:42.710 [2024-11-20 13:51:45.419813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:31:42.710 spare 00:31:42.710 13:51:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.710 13:51:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:31:42.710 [2024-11-20 13:51:45.428468] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:43.646 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:43.646 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:43.646 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:43.646 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:43.646 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:43.646 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:43.646 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:43.646 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.646 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:43.646 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.646 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:43.646 "name": "raid_bdev1", 00:31:43.646 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:43.646 "strip_size_kb": 64, 00:31:43.646 "state": 
"online", 00:31:43.646 "raid_level": "raid5f", 00:31:43.646 "superblock": true, 00:31:43.646 "num_base_bdevs": 4, 00:31:43.646 "num_base_bdevs_discovered": 4, 00:31:43.646 "num_base_bdevs_operational": 4, 00:31:43.646 "process": { 00:31:43.646 "type": "rebuild", 00:31:43.646 "target": "spare", 00:31:43.646 "progress": { 00:31:43.646 "blocks": 17280, 00:31:43.646 "percent": 9 00:31:43.646 } 00:31:43.646 }, 00:31:43.646 "base_bdevs_list": [ 00:31:43.646 { 00:31:43.646 "name": "spare", 00:31:43.646 "uuid": "fa5f57f6-85f6-5093-a935-0a7a7ea6be2c", 00:31:43.646 "is_configured": true, 00:31:43.646 "data_offset": 2048, 00:31:43.646 "data_size": 63488 00:31:43.646 }, 00:31:43.646 { 00:31:43.646 "name": "BaseBdev2", 00:31:43.646 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:43.646 "is_configured": true, 00:31:43.646 "data_offset": 2048, 00:31:43.646 "data_size": 63488 00:31:43.646 }, 00:31:43.646 { 00:31:43.646 "name": "BaseBdev3", 00:31:43.646 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:43.646 "is_configured": true, 00:31:43.646 "data_offset": 2048, 00:31:43.646 "data_size": 63488 00:31:43.646 }, 00:31:43.646 { 00:31:43.646 "name": "BaseBdev4", 00:31:43.646 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:43.646 "is_configured": true, 00:31:43.646 "data_offset": 2048, 00:31:43.646 "data_size": 63488 00:31:43.646 } 00:31:43.646 ] 00:31:43.646 }' 00:31:43.646 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:43.646 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:43.646 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:31:43.905 13:51:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:43.905 [2024-11-20 13:51:46.590495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:43.905 [2024-11-20 13:51:46.640479] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:43.905 [2024-11-20 13:51:46.640713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:43.905 [2024-11-20 13:51:46.640924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:43.905 [2024-11-20 13:51:46.641063] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:43.905 13:51:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:43.905 "name": "raid_bdev1", 00:31:43.905 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:43.905 "strip_size_kb": 64, 00:31:43.905 "state": "online", 00:31:43.905 "raid_level": "raid5f", 00:31:43.905 "superblock": true, 00:31:43.905 "num_base_bdevs": 4, 00:31:43.905 "num_base_bdevs_discovered": 3, 00:31:43.905 "num_base_bdevs_operational": 3, 00:31:43.905 "base_bdevs_list": [ 00:31:43.905 { 00:31:43.905 "name": null, 00:31:43.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:43.905 "is_configured": false, 00:31:43.905 "data_offset": 0, 00:31:43.905 "data_size": 63488 00:31:43.905 }, 00:31:43.905 { 00:31:43.905 "name": "BaseBdev2", 00:31:43.905 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:43.905 "is_configured": true, 00:31:43.905 "data_offset": 2048, 00:31:43.905 "data_size": 63488 00:31:43.905 }, 00:31:43.905 { 00:31:43.905 "name": "BaseBdev3", 00:31:43.905 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:43.905 "is_configured": true, 00:31:43.905 "data_offset": 2048, 00:31:43.905 "data_size": 63488 00:31:43.905 }, 00:31:43.905 { 00:31:43.905 "name": "BaseBdev4", 00:31:43.905 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:43.905 "is_configured": true, 00:31:43.905 "data_offset": 2048, 00:31:43.905 
"data_size": 63488 00:31:43.905 } 00:31:43.905 ] 00:31:43.905 }' 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:43.905 13:51:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:44.474 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:44.474 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:44.474 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:44.474 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:44.474 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:44.474 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:44.474 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.474 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:44.474 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:44.474 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.474 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:44.474 "name": "raid_bdev1", 00:31:44.474 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:44.474 "strip_size_kb": 64, 00:31:44.474 "state": "online", 00:31:44.474 "raid_level": "raid5f", 00:31:44.474 "superblock": true, 00:31:44.474 "num_base_bdevs": 4, 00:31:44.474 "num_base_bdevs_discovered": 3, 00:31:44.474 "num_base_bdevs_operational": 3, 00:31:44.474 "base_bdevs_list": [ 00:31:44.474 { 00:31:44.474 "name": null, 00:31:44.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:44.474 
"is_configured": false, 00:31:44.474 "data_offset": 0, 00:31:44.474 "data_size": 63488 00:31:44.474 }, 00:31:44.474 { 00:31:44.474 "name": "BaseBdev2", 00:31:44.474 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:44.474 "is_configured": true, 00:31:44.475 "data_offset": 2048, 00:31:44.475 "data_size": 63488 00:31:44.475 }, 00:31:44.475 { 00:31:44.475 "name": "BaseBdev3", 00:31:44.475 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:44.475 "is_configured": true, 00:31:44.475 "data_offset": 2048, 00:31:44.475 "data_size": 63488 00:31:44.475 }, 00:31:44.475 { 00:31:44.475 "name": "BaseBdev4", 00:31:44.475 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:44.475 "is_configured": true, 00:31:44.475 "data_offset": 2048, 00:31:44.475 "data_size": 63488 00:31:44.475 } 00:31:44.475 ] 00:31:44.475 }' 00:31:44.475 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:44.475 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:44.475 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:44.475 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:44.475 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:31:44.475 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.475 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:44.475 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.475 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:44.475 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.475 13:51:47 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:44.475 [2024-11-20 13:51:47.319978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:44.475 [2024-11-20 13:51:47.320179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:44.475 [2024-11-20 13:51:47.320225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:31:44.475 [2024-11-20 13:51:47.320242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:44.475 [2024-11-20 13:51:47.320826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:44.475 [2024-11-20 13:51:47.320858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:44.475 [2024-11-20 13:51:47.320977] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:44.475 [2024-11-20 13:51:47.321000] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:44.475 [2024-11-20 13:51:47.321018] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:44.475 [2024-11-20 13:51:47.321031] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:31:44.475 BaseBdev1 00:31:44.475 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.475 13:51:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:31:45.852 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:45.852 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:45.852 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:31:45.852 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:45.852 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:45.852 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:45.852 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:45.852 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:45.852 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:45.852 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:45.852 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:45.852 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:45.852 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.852 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:45.852 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.852 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:45.852 "name": "raid_bdev1", 00:31:45.852 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:45.852 "strip_size_kb": 64, 00:31:45.852 "state": "online", 00:31:45.852 "raid_level": "raid5f", 00:31:45.852 "superblock": true, 00:31:45.852 "num_base_bdevs": 4, 00:31:45.852 "num_base_bdevs_discovered": 3, 00:31:45.852 "num_base_bdevs_operational": 3, 00:31:45.852 "base_bdevs_list": [ 00:31:45.852 { 00:31:45.852 "name": null, 00:31:45.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:45.852 "is_configured": false, 00:31:45.852 
"data_offset": 0, 00:31:45.852 "data_size": 63488 00:31:45.852 }, 00:31:45.852 { 00:31:45.852 "name": "BaseBdev2", 00:31:45.852 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:45.852 "is_configured": true, 00:31:45.852 "data_offset": 2048, 00:31:45.852 "data_size": 63488 00:31:45.852 }, 00:31:45.852 { 00:31:45.852 "name": "BaseBdev3", 00:31:45.852 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:45.852 "is_configured": true, 00:31:45.852 "data_offset": 2048, 00:31:45.852 "data_size": 63488 00:31:45.852 }, 00:31:45.852 { 00:31:45.852 "name": "BaseBdev4", 00:31:45.852 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:45.852 "is_configured": true, 00:31:45.852 "data_offset": 2048, 00:31:45.852 "data_size": 63488 00:31:45.852 } 00:31:45.852 ] 00:31:45.852 }' 00:31:45.852 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:45.852 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:46.111 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:46.111 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:46.111 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:46.111 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:46.111 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:46.111 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:46.111 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.111 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:46.111 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:31:46.111 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.111 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:46.111 "name": "raid_bdev1", 00:31:46.111 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:46.111 "strip_size_kb": 64, 00:31:46.111 "state": "online", 00:31:46.111 "raid_level": "raid5f", 00:31:46.111 "superblock": true, 00:31:46.111 "num_base_bdevs": 4, 00:31:46.111 "num_base_bdevs_discovered": 3, 00:31:46.111 "num_base_bdevs_operational": 3, 00:31:46.111 "base_bdevs_list": [ 00:31:46.111 { 00:31:46.111 "name": null, 00:31:46.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:46.111 "is_configured": false, 00:31:46.111 "data_offset": 0, 00:31:46.111 "data_size": 63488 00:31:46.111 }, 00:31:46.111 { 00:31:46.111 "name": "BaseBdev2", 00:31:46.111 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:46.111 "is_configured": true, 00:31:46.111 "data_offset": 2048, 00:31:46.111 "data_size": 63488 00:31:46.111 }, 00:31:46.111 { 00:31:46.111 "name": "BaseBdev3", 00:31:46.111 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:46.111 "is_configured": true, 00:31:46.111 "data_offset": 2048, 00:31:46.111 "data_size": 63488 00:31:46.111 }, 00:31:46.112 { 00:31:46.112 "name": "BaseBdev4", 00:31:46.112 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:46.112 "is_configured": true, 00:31:46.112 "data_offset": 2048, 00:31:46.112 "data_size": 63488 00:31:46.112 } 00:31:46.112 ] 00:31:46.112 }' 00:31:46.112 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:46.112 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:46.112 13:51:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:46.372 13:51:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:46.372 
13:51:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:46.372 13:51:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:31:46.372 13:51:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:46.372 13:51:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:46.372 13:51:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:46.372 13:51:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:46.372 13:51:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:46.372 13:51:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:46.372 13:51:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.372 13:51:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:46.372 [2024-11-20 13:51:49.036996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:46.372 [2024-11-20 13:51:49.037214] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:46.372 [2024-11-20 13:51:49.037237] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:46.372 request: 00:31:46.372 { 00:31:46.372 "base_bdev": "BaseBdev1", 00:31:46.372 "raid_bdev": "raid_bdev1", 00:31:46.372 "method": "bdev_raid_add_base_bdev", 00:31:46.372 "req_id": 1 00:31:46.372 } 00:31:46.372 Got JSON-RPC error response 00:31:46.372 response: 00:31:46.372 { 00:31:46.372 "code": -22, 00:31:46.372 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:31:46.372 } 00:31:46.372 13:51:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:46.372 13:51:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:31:46.372 13:51:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:46.372 13:51:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:46.372 13:51:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:46.372 13:51:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:31:47.309 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:47.309 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:47.309 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:47.309 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:31:47.309 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:47.309 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:47.309 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:47.309 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:47.309 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:47.309 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:47.309 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:47.309 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:47.309 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.309 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:47.309 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.309 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:47.309 "name": "raid_bdev1", 00:31:47.309 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:47.309 "strip_size_kb": 64, 00:31:47.309 "state": "online", 00:31:47.309 "raid_level": "raid5f", 00:31:47.309 "superblock": true, 00:31:47.309 "num_base_bdevs": 4, 00:31:47.309 "num_base_bdevs_discovered": 3, 00:31:47.309 "num_base_bdevs_operational": 3, 00:31:47.309 "base_bdevs_list": [ 00:31:47.309 { 00:31:47.309 "name": null, 00:31:47.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:47.309 "is_configured": false, 00:31:47.309 "data_offset": 0, 00:31:47.309 "data_size": 63488 00:31:47.309 }, 00:31:47.309 { 00:31:47.309 "name": "BaseBdev2", 00:31:47.309 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:47.309 "is_configured": true, 00:31:47.309 "data_offset": 2048, 00:31:47.309 "data_size": 63488 00:31:47.309 }, 00:31:47.309 { 00:31:47.309 "name": "BaseBdev3", 00:31:47.309 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:47.309 "is_configured": true, 00:31:47.309 "data_offset": 2048, 00:31:47.309 "data_size": 63488 00:31:47.309 }, 00:31:47.309 { 00:31:47.309 "name": "BaseBdev4", 00:31:47.309 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:47.309 "is_configured": true, 00:31:47.309 "data_offset": 2048, 00:31:47.309 "data_size": 63488 00:31:47.309 } 00:31:47.309 ] 00:31:47.309 }' 00:31:47.309 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:47.309 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:47.878 "name": "raid_bdev1", 00:31:47.878 "uuid": "07eb9030-6bd2-4c13-99e4-05fabdb59198", 00:31:47.878 "strip_size_kb": 64, 00:31:47.878 "state": "online", 00:31:47.878 "raid_level": "raid5f", 00:31:47.878 "superblock": true, 00:31:47.878 "num_base_bdevs": 4, 00:31:47.878 "num_base_bdevs_discovered": 3, 00:31:47.878 "num_base_bdevs_operational": 3, 00:31:47.878 "base_bdevs_list": [ 00:31:47.878 { 00:31:47.878 "name": null, 00:31:47.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:47.878 "is_configured": false, 00:31:47.878 "data_offset": 0, 00:31:47.878 "data_size": 63488 00:31:47.878 }, 00:31:47.878 { 00:31:47.878 "name": "BaseBdev2", 00:31:47.878 "uuid": "a0f55839-770b-52bc-ae2e-353d5a7161d0", 00:31:47.878 "is_configured": true, 
00:31:47.878 "data_offset": 2048, 00:31:47.878 "data_size": 63488 00:31:47.878 }, 00:31:47.878 { 00:31:47.878 "name": "BaseBdev3", 00:31:47.878 "uuid": "ff5f150f-61aa-532a-bb23-b0766569b1e7", 00:31:47.878 "is_configured": true, 00:31:47.878 "data_offset": 2048, 00:31:47.878 "data_size": 63488 00:31:47.878 }, 00:31:47.878 { 00:31:47.878 "name": "BaseBdev4", 00:31:47.878 "uuid": "58681291-8d84-5d6c-bc3c-088e12f22889", 00:31:47.878 "is_configured": true, 00:31:47.878 "data_offset": 2048, 00:31:47.878 "data_size": 63488 00:31:47.878 } 00:31:47.878 ] 00:31:47.878 }' 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85763 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85763 ']' 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85763 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85763 00:31:47.878 killing process with pid 85763 00:31:47.878 Received shutdown signal, test time was about 60.000000 seconds 00:31:47.878 00:31:47.878 Latency(us) 00:31:47.878 [2024-11-20T13:51:50.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:47.878 [2024-11-20T13:51:50.795Z] 
=================================================================================================================== 00:31:47.878 [2024-11-20T13:51:50.795Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85763' 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85763 00:31:47.878 [2024-11-20 13:51:50.758866] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:47.878 13:51:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85763 00:31:47.878 [2024-11-20 13:51:50.759030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:47.878 [2024-11-20 13:51:50.759136] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:47.878 [2024-11-20 13:51:50.759158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:31:48.447 [2024-11-20 13:51:51.197981] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:49.383 ************************************ 00:31:49.383 END TEST raid5f_rebuild_test_sb 00:31:49.383 ************************************ 00:31:49.383 13:51:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:31:49.383 00:31:49.383 real 0m28.817s 00:31:49.383 user 0m37.639s 00:31:49.383 sys 0m2.885s 00:31:49.383 13:51:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:49.384 13:51:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:49.384 13:51:52 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:31:49.384 13:51:52 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:31:49.384 13:51:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:49.384 13:51:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:49.384 13:51:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:49.384 ************************************ 00:31:49.384 START TEST raid_state_function_test_sb_4k 00:31:49.384 ************************************ 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:49.384 13:51:52 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:31:49.384 Process raid pid: 86586 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86586 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86586' 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86586 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86586 ']' 00:31:49.384 13:51:52 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:49.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:49.384 13:51:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:49.643 [2024-11-20 13:51:52.395330] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:31:49.643 [2024-11-20 13:51:52.395704] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:49.901 [2024-11-20 13:51:52.581388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:49.901 [2024-11-20 13:51:52.712562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.160 [2024-11-20 13:51:52.919529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:50.160 [2024-11-20 13:51:52.919566] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:50.728 [2024-11-20 13:51:53.382120] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:50.728 [2024-11-20 13:51:53.382185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:50.728 [2024-11-20 13:51:53.382204] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:50.728 [2024-11-20 13:51:53.382220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:50.728 
13:51:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:50.728 "name": "Existed_Raid", 00:31:50.728 "uuid": "f82f4dea-8010-4100-bd86-c4e845586352", 00:31:50.728 "strip_size_kb": 0, 00:31:50.728 "state": "configuring", 00:31:50.728 "raid_level": "raid1", 00:31:50.728 "superblock": true, 00:31:50.728 "num_base_bdevs": 2, 00:31:50.728 "num_base_bdevs_discovered": 0, 00:31:50.728 "num_base_bdevs_operational": 2, 00:31:50.728 "base_bdevs_list": [ 00:31:50.728 { 00:31:50.728 "name": "BaseBdev1", 00:31:50.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:50.728 "is_configured": false, 00:31:50.728 "data_offset": 0, 00:31:50.728 "data_size": 0 00:31:50.728 }, 00:31:50.728 { 00:31:50.728 "name": "BaseBdev2", 00:31:50.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:50.728 "is_configured": false, 00:31:50.728 "data_offset": 0, 00:31:50.728 "data_size": 0 00:31:50.728 } 00:31:50.728 ] 00:31:50.728 }' 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:50.728 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:51.296 [2024-11-20 13:51:53.918196] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:51.296 [2024-11-20 13:51:53.918239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:51.296 [2024-11-20 13:51:53.930186] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:51.296 [2024-11-20 13:51:53.930365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:51.296 [2024-11-20 13:51:53.930483] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:51.296 [2024-11-20 13:51:53.930545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.296 13:51:53 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:51.296 [2024-11-20 13:51:53.979623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:51.296 BaseBdev1 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.296 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:51.297 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.297 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:51.297 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.297 13:51:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:51.297 [ 00:31:51.297 { 00:31:51.297 "name": "BaseBdev1", 00:31:51.297 "aliases": [ 00:31:51.297 
"89873187-feab-429b-8d1a-a270f7603f45" 00:31:51.297 ], 00:31:51.297 "product_name": "Malloc disk", 00:31:51.297 "block_size": 4096, 00:31:51.297 "num_blocks": 8192, 00:31:51.297 "uuid": "89873187-feab-429b-8d1a-a270f7603f45", 00:31:51.297 "assigned_rate_limits": { 00:31:51.297 "rw_ios_per_sec": 0, 00:31:51.297 "rw_mbytes_per_sec": 0, 00:31:51.297 "r_mbytes_per_sec": 0, 00:31:51.297 "w_mbytes_per_sec": 0 00:31:51.297 }, 00:31:51.297 "claimed": true, 00:31:51.297 "claim_type": "exclusive_write", 00:31:51.297 "zoned": false, 00:31:51.297 "supported_io_types": { 00:31:51.297 "read": true, 00:31:51.297 "write": true, 00:31:51.297 "unmap": true, 00:31:51.297 "flush": true, 00:31:51.297 "reset": true, 00:31:51.297 "nvme_admin": false, 00:31:51.297 "nvme_io": false, 00:31:51.297 "nvme_io_md": false, 00:31:51.297 "write_zeroes": true, 00:31:51.297 "zcopy": true, 00:31:51.297 "get_zone_info": false, 00:31:51.297 "zone_management": false, 00:31:51.297 "zone_append": false, 00:31:51.297 "compare": false, 00:31:51.297 "compare_and_write": false, 00:31:51.297 "abort": true, 00:31:51.297 "seek_hole": false, 00:31:51.297 "seek_data": false, 00:31:51.297 "copy": true, 00:31:51.297 "nvme_iov_md": false 00:31:51.297 }, 00:31:51.297 "memory_domains": [ 00:31:51.297 { 00:31:51.297 "dma_device_id": "system", 00:31:51.297 "dma_device_type": 1 00:31:51.297 }, 00:31:51.297 { 00:31:51.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:51.297 "dma_device_type": 2 00:31:51.297 } 00:31:51.297 ], 00:31:51.297 "driver_specific": {} 00:31:51.297 } 00:31:51.297 ] 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:51.297 "name": "Existed_Raid", 00:31:51.297 "uuid": "f2033b0b-af35-4f91-8a77-6ff7523ca845", 00:31:51.297 "strip_size_kb": 0, 00:31:51.297 "state": "configuring", 00:31:51.297 "raid_level": "raid1", 00:31:51.297 "superblock": true, 00:31:51.297 "num_base_bdevs": 2, 00:31:51.297 
"num_base_bdevs_discovered": 1, 00:31:51.297 "num_base_bdevs_operational": 2, 00:31:51.297 "base_bdevs_list": [ 00:31:51.297 { 00:31:51.297 "name": "BaseBdev1", 00:31:51.297 "uuid": "89873187-feab-429b-8d1a-a270f7603f45", 00:31:51.297 "is_configured": true, 00:31:51.297 "data_offset": 256, 00:31:51.297 "data_size": 7936 00:31:51.297 }, 00:31:51.297 { 00:31:51.297 "name": "BaseBdev2", 00:31:51.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:51.297 "is_configured": false, 00:31:51.297 "data_offset": 0, 00:31:51.297 "data_size": 0 00:31:51.297 } 00:31:51.297 ] 00:31:51.297 }' 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:51.297 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:51.865 [2024-11-20 13:51:54.547813] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:51.865 [2024-11-20 13:51:54.548030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:51.865 [2024-11-20 13:51:54.559853] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:51.865 [2024-11-20 13:51:54.562474] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:51.865 [2024-11-20 13:51:54.562671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:51.865 "name": "Existed_Raid", 00:31:51.865 "uuid": "13dbf1b4-1a96-4ac8-b48b-5eae877d66f6", 00:31:51.865 "strip_size_kb": 0, 00:31:51.865 "state": "configuring", 00:31:51.865 "raid_level": "raid1", 00:31:51.865 "superblock": true, 00:31:51.865 "num_base_bdevs": 2, 00:31:51.865 "num_base_bdevs_discovered": 1, 00:31:51.865 "num_base_bdevs_operational": 2, 00:31:51.865 "base_bdevs_list": [ 00:31:51.865 { 00:31:51.865 "name": "BaseBdev1", 00:31:51.865 "uuid": "89873187-feab-429b-8d1a-a270f7603f45", 00:31:51.865 "is_configured": true, 00:31:51.865 "data_offset": 256, 00:31:51.865 "data_size": 7936 00:31:51.865 }, 00:31:51.865 { 00:31:51.865 "name": "BaseBdev2", 00:31:51.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:51.865 "is_configured": false, 00:31:51.865 "data_offset": 0, 00:31:51.865 "data_size": 0 00:31:51.865 } 00:31:51.865 ] 00:31:51.865 }' 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:51.865 13:51:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.487 13:51:55 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:52.487 [2024-11-20 13:51:55.139919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:52.487 [2024-11-20 13:51:55.140284] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:52.487 [2024-11-20 13:51:55.140302] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:52.487 [2024-11-20 13:51:55.140643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:52.487 BaseBdev2 00:31:52.487 [2024-11-20 13:51:55.140850] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:52.487 [2024-11-20 13:51:55.140874] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:31:52.487 [2024-11-20 13:51:55.141078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:31:52.487 13:51:55 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:52.487 [ 00:31:52.487 { 00:31:52.487 "name": "BaseBdev2", 00:31:52.487 "aliases": [ 00:31:52.487 "125a844c-b06b-40dc-8dbb-6cd9901aac22" 00:31:52.487 ], 00:31:52.487 "product_name": "Malloc disk", 00:31:52.487 "block_size": 4096, 00:31:52.487 "num_blocks": 8192, 00:31:52.487 "uuid": "125a844c-b06b-40dc-8dbb-6cd9901aac22", 00:31:52.487 "assigned_rate_limits": { 00:31:52.487 "rw_ios_per_sec": 0, 00:31:52.487 "rw_mbytes_per_sec": 0, 00:31:52.487 "r_mbytes_per_sec": 0, 00:31:52.487 "w_mbytes_per_sec": 0 00:31:52.487 }, 00:31:52.487 "claimed": true, 00:31:52.487 "claim_type": "exclusive_write", 00:31:52.487 "zoned": false, 00:31:52.487 "supported_io_types": { 00:31:52.487 "read": true, 00:31:52.487 "write": true, 00:31:52.487 "unmap": true, 00:31:52.487 "flush": true, 00:31:52.487 "reset": true, 00:31:52.487 "nvme_admin": false, 00:31:52.487 "nvme_io": false, 00:31:52.487 "nvme_io_md": false, 00:31:52.487 "write_zeroes": true, 00:31:52.487 "zcopy": true, 00:31:52.487 "get_zone_info": false, 00:31:52.487 "zone_management": false, 00:31:52.487 "zone_append": false, 00:31:52.487 "compare": false, 00:31:52.487 "compare_and_write": false, 00:31:52.487 "abort": true, 00:31:52.487 "seek_hole": false, 00:31:52.487 "seek_data": false, 00:31:52.487 "copy": true, 00:31:52.487 "nvme_iov_md": false 
00:31:52.487 }, 00:31:52.487 "memory_domains": [ 00:31:52.487 { 00:31:52.487 "dma_device_id": "system", 00:31:52.487 "dma_device_type": 1 00:31:52.487 }, 00:31:52.487 { 00:31:52.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:52.487 "dma_device_type": 2 00:31:52.487 } 00:31:52.487 ], 00:31:52.487 "driver_specific": {} 00:31:52.487 } 00:31:52.487 ] 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:52.487 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:52.488 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:52.488 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:52.488 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:52.488 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:31:52.488 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:52.488 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:52.488 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.488 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:52.488 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.488 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:52.488 "name": "Existed_Raid", 00:31:52.488 "uuid": "13dbf1b4-1a96-4ac8-b48b-5eae877d66f6", 00:31:52.488 "strip_size_kb": 0, 00:31:52.488 "state": "online", 00:31:52.488 "raid_level": "raid1", 00:31:52.488 "superblock": true, 00:31:52.488 "num_base_bdevs": 2, 00:31:52.488 "num_base_bdevs_discovered": 2, 00:31:52.488 "num_base_bdevs_operational": 2, 00:31:52.488 "base_bdevs_list": [ 00:31:52.488 { 00:31:52.488 "name": "BaseBdev1", 00:31:52.488 "uuid": "89873187-feab-429b-8d1a-a270f7603f45", 00:31:52.488 "is_configured": true, 00:31:52.488 "data_offset": 256, 00:31:52.488 "data_size": 7936 00:31:52.488 }, 00:31:52.488 { 00:31:52.488 "name": "BaseBdev2", 00:31:52.488 "uuid": "125a844c-b06b-40dc-8dbb-6cd9901aac22", 00:31:52.488 "is_configured": true, 00:31:52.488 "data_offset": 256, 00:31:52.488 "data_size": 7936 00:31:52.488 } 00:31:52.488 ] 00:31:52.488 }' 00:31:52.488 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:52.488 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:53.057 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:31:53.057 13:51:55 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:53.057 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:53.057 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:53.057 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:31:53.057 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:53.057 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:53.057 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:53.057 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.057 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:53.057 [2024-11-20 13:51:55.720489] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:53.057 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.057 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:53.057 "name": "Existed_Raid", 00:31:53.057 "aliases": [ 00:31:53.057 "13dbf1b4-1a96-4ac8-b48b-5eae877d66f6" 00:31:53.057 ], 00:31:53.057 "product_name": "Raid Volume", 00:31:53.057 "block_size": 4096, 00:31:53.057 "num_blocks": 7936, 00:31:53.057 "uuid": "13dbf1b4-1a96-4ac8-b48b-5eae877d66f6", 00:31:53.057 "assigned_rate_limits": { 00:31:53.057 "rw_ios_per_sec": 0, 00:31:53.057 "rw_mbytes_per_sec": 0, 00:31:53.057 "r_mbytes_per_sec": 0, 00:31:53.057 "w_mbytes_per_sec": 0 00:31:53.057 }, 00:31:53.057 "claimed": false, 00:31:53.057 "zoned": false, 00:31:53.057 "supported_io_types": { 00:31:53.057 "read": true, 
00:31:53.057 "write": true, 00:31:53.057 "unmap": false, 00:31:53.057 "flush": false, 00:31:53.057 "reset": true, 00:31:53.057 "nvme_admin": false, 00:31:53.057 "nvme_io": false, 00:31:53.057 "nvme_io_md": false, 00:31:53.057 "write_zeroes": true, 00:31:53.057 "zcopy": false, 00:31:53.057 "get_zone_info": false, 00:31:53.057 "zone_management": false, 00:31:53.057 "zone_append": false, 00:31:53.057 "compare": false, 00:31:53.057 "compare_and_write": false, 00:31:53.057 "abort": false, 00:31:53.057 "seek_hole": false, 00:31:53.057 "seek_data": false, 00:31:53.057 "copy": false, 00:31:53.057 "nvme_iov_md": false 00:31:53.057 }, 00:31:53.057 "memory_domains": [ 00:31:53.057 { 00:31:53.057 "dma_device_id": "system", 00:31:53.057 "dma_device_type": 1 00:31:53.057 }, 00:31:53.057 { 00:31:53.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:53.057 "dma_device_type": 2 00:31:53.057 }, 00:31:53.057 { 00:31:53.057 "dma_device_id": "system", 00:31:53.057 "dma_device_type": 1 00:31:53.057 }, 00:31:53.057 { 00:31:53.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:53.057 "dma_device_type": 2 00:31:53.058 } 00:31:53.058 ], 00:31:53.058 "driver_specific": { 00:31:53.058 "raid": { 00:31:53.058 "uuid": "13dbf1b4-1a96-4ac8-b48b-5eae877d66f6", 00:31:53.058 "strip_size_kb": 0, 00:31:53.058 "state": "online", 00:31:53.058 "raid_level": "raid1", 00:31:53.058 "superblock": true, 00:31:53.058 "num_base_bdevs": 2, 00:31:53.058 "num_base_bdevs_discovered": 2, 00:31:53.058 "num_base_bdevs_operational": 2, 00:31:53.058 "base_bdevs_list": [ 00:31:53.058 { 00:31:53.058 "name": "BaseBdev1", 00:31:53.058 "uuid": "89873187-feab-429b-8d1a-a270f7603f45", 00:31:53.058 "is_configured": true, 00:31:53.058 "data_offset": 256, 00:31:53.058 "data_size": 7936 00:31:53.058 }, 00:31:53.058 { 00:31:53.058 "name": "BaseBdev2", 00:31:53.058 "uuid": "125a844c-b06b-40dc-8dbb-6cd9901aac22", 00:31:53.058 "is_configured": true, 00:31:53.058 "data_offset": 256, 00:31:53.058 "data_size": 7936 00:31:53.058 } 
00:31:53.058 ] 00:31:53.058 } 00:31:53.058 } 00:31:53.058 }' 00:31:53.058 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:53.058 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:31:53.058 BaseBdev2' 00:31:53.058 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:53.058 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:31:53.058 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:53.058 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:31:53.058 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.058 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:53.058 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:53.058 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.058 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:31:53.058 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:31:53.058 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:53.058 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:53.058 13:51:55 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:53.058 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.058 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:53.058 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.317 13:51:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:53.317 [2024-11-20 13:51:56.004284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:31:53.317 13:51:56 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:53.317 "name": "Existed_Raid", 00:31:53.317 "uuid": "13dbf1b4-1a96-4ac8-b48b-5eae877d66f6", 00:31:53.317 "strip_size_kb": 0, 00:31:53.317 "state": "online", 00:31:53.317 "raid_level": "raid1", 00:31:53.317 "superblock": true, 00:31:53.317 
"num_base_bdevs": 2, 00:31:53.317 "num_base_bdevs_discovered": 1, 00:31:53.317 "num_base_bdevs_operational": 1, 00:31:53.317 "base_bdevs_list": [ 00:31:53.317 { 00:31:53.317 "name": null, 00:31:53.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:53.317 "is_configured": false, 00:31:53.317 "data_offset": 0, 00:31:53.317 "data_size": 7936 00:31:53.317 }, 00:31:53.317 { 00:31:53.317 "name": "BaseBdev2", 00:31:53.317 "uuid": "125a844c-b06b-40dc-8dbb-6cd9901aac22", 00:31:53.317 "is_configured": true, 00:31:53.317 "data_offset": 256, 00:31:53.317 "data_size": 7936 00:31:53.317 } 00:31:53.317 ] 00:31:53.317 }' 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:53.317 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:53.915 [2024-11-20 13:51:56.679156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:53.915 [2024-11-20 13:51:56.679306] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:53.915 [2024-11-20 13:51:56.767195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:53.915 [2024-11-20 13:51:56.767270] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:53.915 [2024-11-20 13:51:56.767289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:31:53.915 13:51:56 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86586 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86586 ']' 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86586 00:31:53.915 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:31:54.173 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:54.173 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86586 00:31:54.173 killing process with pid 86586 00:31:54.173 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:54.173 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:54.173 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86586' 00:31:54.173 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86586 00:31:54.173 [2024-11-20 13:51:56.859488] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:54.173 13:51:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86586 00:31:54.173 [2024-11-20 13:51:56.875398] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:55.109 13:51:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:31:55.109 00:31:55.109 real 0m5.669s 00:31:55.109 user 0m8.525s 00:31:55.109 sys 0m0.865s 00:31:55.109 13:51:57 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:55.109 13:51:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:55.109 ************************************ 00:31:55.109 END TEST raid_state_function_test_sb_4k 00:31:55.109 ************************************ 00:31:55.109 13:51:57 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:31:55.109 13:51:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:55.109 13:51:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.109 13:51:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:55.109 ************************************ 00:31:55.109 START TEST raid_superblock_test_4k 00:31:55.109 ************************************ 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:31:55.109 
13:51:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86838 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86838 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86838 ']' 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:55.109 13:51:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:55.368 [2024-11-20 13:51:58.102639] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:31:55.368 [2024-11-20 13:51:58.102814] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86838 ] 00:31:55.368 [2024-11-20 13:51:58.278498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.626 [2024-11-20 13:51:58.417244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.884 [2024-11-20 13:51:58.629501] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:55.884 [2024-11-20 13:51:58.629545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:56.450 malloc1 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:56.450 [2024-11-20 13:51:59.173895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:56.450 [2024-11-20 13:51:59.174125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:56.450 [2024-11-20 13:51:59.174205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:56.450 [2024-11-20 13:51:59.174334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:56.450 [2024-11-20 13:51:59.177352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:56.450 [2024-11-20 13:51:59.177541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:56.450 pt1 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:56.450 malloc2 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:56.450 [2024-11-20 13:51:59.231171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:56.450 [2024-11-20 13:51:59.231253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:56.450 [2024-11-20 13:51:59.231303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:56.450 [2024-11-20 13:51:59.231317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:56.450 [2024-11-20 13:51:59.234272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:56.450 [2024-11-20 
13:51:59.234343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:56.450 pt2 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:56.450 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:56.451 [2024-11-20 13:51:59.243236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:56.451 [2024-11-20 13:51:59.245824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:56.451 [2024-11-20 13:51:59.246122] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:56.451 [2024-11-20 13:51:59.246146] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:56.451 [2024-11-20 13:51:59.246460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:56.451 [2024-11-20 13:51:59.246678] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:56.451 [2024-11-20 13:51:59.246703] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:56.451 [2024-11-20 13:51:59.246888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:56.451 "name": "raid_bdev1", 00:31:56.451 "uuid": "5b21a766-0fbd-432f-94c8-8fe005ce6b39", 00:31:56.451 "strip_size_kb": 0, 00:31:56.451 "state": "online", 00:31:56.451 "raid_level": "raid1", 00:31:56.451 "superblock": true, 00:31:56.451 "num_base_bdevs": 2, 00:31:56.451 
"num_base_bdevs_discovered": 2, 00:31:56.451 "num_base_bdevs_operational": 2, 00:31:56.451 "base_bdevs_list": [ 00:31:56.451 { 00:31:56.451 "name": "pt1", 00:31:56.451 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:56.451 "is_configured": true, 00:31:56.451 "data_offset": 256, 00:31:56.451 "data_size": 7936 00:31:56.451 }, 00:31:56.451 { 00:31:56.451 "name": "pt2", 00:31:56.451 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:56.451 "is_configured": true, 00:31:56.451 "data_offset": 256, 00:31:56.451 "data_size": 7936 00:31:56.451 } 00:31:56.451 ] 00:31:56.451 }' 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:56.451 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.018 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:31:57.018 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:31:57.018 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:57.018 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:57.018 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:31:57.018 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:57.018 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:57.018 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.018 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.018 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:57.018 [2024-11-20 13:51:59.751917] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:31:57.018 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.018 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:57.018 "name": "raid_bdev1", 00:31:57.018 "aliases": [ 00:31:57.018 "5b21a766-0fbd-432f-94c8-8fe005ce6b39" 00:31:57.018 ], 00:31:57.018 "product_name": "Raid Volume", 00:31:57.018 "block_size": 4096, 00:31:57.018 "num_blocks": 7936, 00:31:57.018 "uuid": "5b21a766-0fbd-432f-94c8-8fe005ce6b39", 00:31:57.018 "assigned_rate_limits": { 00:31:57.018 "rw_ios_per_sec": 0, 00:31:57.018 "rw_mbytes_per_sec": 0, 00:31:57.018 "r_mbytes_per_sec": 0, 00:31:57.018 "w_mbytes_per_sec": 0 00:31:57.018 }, 00:31:57.018 "claimed": false, 00:31:57.018 "zoned": false, 00:31:57.018 "supported_io_types": { 00:31:57.018 "read": true, 00:31:57.018 "write": true, 00:31:57.018 "unmap": false, 00:31:57.018 "flush": false, 00:31:57.018 "reset": true, 00:31:57.018 "nvme_admin": false, 00:31:57.018 "nvme_io": false, 00:31:57.018 "nvme_io_md": false, 00:31:57.018 "write_zeroes": true, 00:31:57.018 "zcopy": false, 00:31:57.018 "get_zone_info": false, 00:31:57.018 "zone_management": false, 00:31:57.018 "zone_append": false, 00:31:57.018 "compare": false, 00:31:57.019 "compare_and_write": false, 00:31:57.019 "abort": false, 00:31:57.019 "seek_hole": false, 00:31:57.019 "seek_data": false, 00:31:57.019 "copy": false, 00:31:57.019 "nvme_iov_md": false 00:31:57.019 }, 00:31:57.019 "memory_domains": [ 00:31:57.019 { 00:31:57.019 "dma_device_id": "system", 00:31:57.019 "dma_device_type": 1 00:31:57.019 }, 00:31:57.019 { 00:31:57.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:57.019 "dma_device_type": 2 00:31:57.019 }, 00:31:57.019 { 00:31:57.019 "dma_device_id": "system", 00:31:57.019 "dma_device_type": 1 00:31:57.019 }, 00:31:57.019 { 00:31:57.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:57.019 "dma_device_type": 2 00:31:57.019 } 00:31:57.019 ], 
00:31:57.019 "driver_specific": { 00:31:57.019 "raid": { 00:31:57.019 "uuid": "5b21a766-0fbd-432f-94c8-8fe005ce6b39", 00:31:57.019 "strip_size_kb": 0, 00:31:57.019 "state": "online", 00:31:57.019 "raid_level": "raid1", 00:31:57.019 "superblock": true, 00:31:57.019 "num_base_bdevs": 2, 00:31:57.019 "num_base_bdevs_discovered": 2, 00:31:57.019 "num_base_bdevs_operational": 2, 00:31:57.019 "base_bdevs_list": [ 00:31:57.019 { 00:31:57.019 "name": "pt1", 00:31:57.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:57.019 "is_configured": true, 00:31:57.019 "data_offset": 256, 00:31:57.019 "data_size": 7936 00:31:57.019 }, 00:31:57.019 { 00:31:57.019 "name": "pt2", 00:31:57.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:57.019 "is_configured": true, 00:31:57.019 "data_offset": 256, 00:31:57.019 "data_size": 7936 00:31:57.019 } 00:31:57.019 ] 00:31:57.019 } 00:31:57.019 } 00:31:57.019 }' 00:31:57.019 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:57.019 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:31:57.019 pt2' 00:31:57.019 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:57.019 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:31:57.019 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:57.019 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:31:57.019 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:57.019 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.019 13:51:59 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.019 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.278 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:31:57.278 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:31:57.278 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:57.278 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:31:57.278 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.278 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.278 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:57.278 13:51:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.278 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:31:57.278 13:51:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.278 [2024-11-20 13:52:00.007906] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5b21a766-0fbd-432f-94c8-8fe005ce6b39 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 5b21a766-0fbd-432f-94c8-8fe005ce6b39 ']' 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.278 [2024-11-20 13:52:00.059524] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:57.278 [2024-11-20 13:52:00.059568] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:57.278 [2024-11-20 13:52:00.059687] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:57.278 [2024-11-20 13:52:00.059765] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:57.278 [2024-11-20 13:52:00.059785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.278 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.279 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:57.279 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:31:57.279 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.279 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.279 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.279 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:31:57.279 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.279 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.279 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:57.279 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.547 [2024-11-20 13:52:00.203641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:57.547 [2024-11-20 13:52:00.206518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:57.547 [2024-11-20 13:52:00.206759] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:57.547 [2024-11-20 13:52:00.207008] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:57.547 [2024-11-20 13:52:00.207241] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:57.547 [2024-11-20 13:52:00.207391] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:31:57.547 request: 00:31:57.547 { 00:31:57.547 "name": "raid_bdev1", 00:31:57.547 "raid_level": "raid1", 00:31:57.547 "base_bdevs": [ 00:31:57.547 "malloc1", 00:31:57.547 "malloc2" 00:31:57.547 ], 00:31:57.547 "superblock": false, 00:31:57.547 "method": "bdev_raid_create", 00:31:57.547 "req_id": 1 00:31:57.547 } 00:31:57.547 Got JSON-RPC error response 00:31:57.547 response: 00:31:57.547 { 00:31:57.547 "code": -17, 00:31:57.547 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:57.547 } 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.547 [2024-11-20 13:52:00.271794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:57.547 [2024-11-20 13:52:00.271882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:57.547 [2024-11-20 13:52:00.271928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:57.547 [2024-11-20 13:52:00.271947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:57.547 [2024-11-20 13:52:00.275102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:57.547 [2024-11-20 13:52:00.275164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:57.547 [2024-11-20 13:52:00.275303] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:57.547 [2024-11-20 13:52:00.275403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:57.547 pt1 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:57.547 "name": "raid_bdev1", 00:31:57.547 "uuid": "5b21a766-0fbd-432f-94c8-8fe005ce6b39", 00:31:57.547 "strip_size_kb": 0, 00:31:57.547 "state": "configuring", 00:31:57.547 "raid_level": "raid1", 00:31:57.547 "superblock": true, 00:31:57.547 "num_base_bdevs": 2, 00:31:57.547 "num_base_bdevs_discovered": 1, 00:31:57.547 "num_base_bdevs_operational": 2, 00:31:57.547 "base_bdevs_list": [ 00:31:57.547 { 00:31:57.547 "name": "pt1", 00:31:57.547 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:57.547 "is_configured": true, 00:31:57.547 "data_offset": 256, 00:31:57.547 "data_size": 7936 00:31:57.547 }, 00:31:57.547 { 00:31:57.547 "name": null, 00:31:57.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:57.547 "is_configured": false, 00:31:57.547 "data_offset": 256, 00:31:57.547 "data_size": 7936 00:31:57.547 } 
00:31:57.547 ] 00:31:57.547 }' 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:57.547 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:58.115 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:31:58.115 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:31:58.115 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:58.115 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:58.115 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.115 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:58.115 [2024-11-20 13:52:00.807990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:58.115 [2024-11-20 13:52:00.808266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:58.115 [2024-11-20 13:52:00.808307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:31:58.115 [2024-11-20 13:52:00.808327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:58.115 [2024-11-20 13:52:00.808998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:58.115 [2024-11-20 13:52:00.809035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:58.115 [2024-11-20 13:52:00.809160] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:58.115 [2024-11-20 13:52:00.809201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:58.115 [2024-11-20 13:52:00.809353] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:31:58.115 [2024-11-20 13:52:00.809381] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:58.115 [2024-11-20 13:52:00.809685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:58.115 [2024-11-20 13:52:00.809870] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:58.115 [2024-11-20 13:52:00.809891] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:31:58.115 [2024-11-20 13:52:00.810091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:58.115 pt2 00:31:58.115 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.115 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:58.116 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:58.116 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:58.116 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:58.116 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:58.116 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:58.116 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:58.116 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:58.116 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:58.116 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:58.116 13:52:00 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:58.116 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:58.116 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.116 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.116 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:58.116 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.116 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.116 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:58.116 "name": "raid_bdev1", 00:31:58.116 "uuid": "5b21a766-0fbd-432f-94c8-8fe005ce6b39", 00:31:58.116 "strip_size_kb": 0, 00:31:58.116 "state": "online", 00:31:58.116 "raid_level": "raid1", 00:31:58.116 "superblock": true, 00:31:58.116 "num_base_bdevs": 2, 00:31:58.116 "num_base_bdevs_discovered": 2, 00:31:58.116 "num_base_bdevs_operational": 2, 00:31:58.116 "base_bdevs_list": [ 00:31:58.116 { 00:31:58.116 "name": "pt1", 00:31:58.116 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:58.116 "is_configured": true, 00:31:58.116 "data_offset": 256, 00:31:58.116 "data_size": 7936 00:31:58.116 }, 00:31:58.116 { 00:31:58.116 "name": "pt2", 00:31:58.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:58.116 "is_configured": true, 00:31:58.116 "data_offset": 256, 00:31:58.116 "data_size": 7936 00:31:58.116 } 00:31:58.116 ] 00:31:58.116 }' 00:31:58.116 13:52:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:58.116 13:52:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:58.684 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:31:58.684 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:31:58.684 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:58.684 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:58.684 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:31:58.684 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:58.684 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:58.684 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:58.684 13:52:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.684 13:52:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:58.684 [2024-11-20 13:52:01.348406] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:58.684 13:52:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.684 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:58.684 "name": "raid_bdev1", 00:31:58.684 "aliases": [ 00:31:58.684 "5b21a766-0fbd-432f-94c8-8fe005ce6b39" 00:31:58.684 ], 00:31:58.684 "product_name": "Raid Volume", 00:31:58.684 "block_size": 4096, 00:31:58.684 "num_blocks": 7936, 00:31:58.684 "uuid": "5b21a766-0fbd-432f-94c8-8fe005ce6b39", 00:31:58.684 "assigned_rate_limits": { 00:31:58.684 "rw_ios_per_sec": 0, 00:31:58.685 "rw_mbytes_per_sec": 0, 00:31:58.685 "r_mbytes_per_sec": 0, 00:31:58.685 "w_mbytes_per_sec": 0 00:31:58.685 }, 00:31:58.685 "claimed": false, 00:31:58.685 "zoned": false, 00:31:58.685 "supported_io_types": { 00:31:58.685 "read": true, 00:31:58.685 "write": true, 00:31:58.685 "unmap": false, 
00:31:58.685 "flush": false, 00:31:58.685 "reset": true, 00:31:58.685 "nvme_admin": false, 00:31:58.685 "nvme_io": false, 00:31:58.685 "nvme_io_md": false, 00:31:58.685 "write_zeroes": true, 00:31:58.685 "zcopy": false, 00:31:58.685 "get_zone_info": false, 00:31:58.685 "zone_management": false, 00:31:58.685 "zone_append": false, 00:31:58.685 "compare": false, 00:31:58.685 "compare_and_write": false, 00:31:58.685 "abort": false, 00:31:58.685 "seek_hole": false, 00:31:58.685 "seek_data": false, 00:31:58.685 "copy": false, 00:31:58.685 "nvme_iov_md": false 00:31:58.685 }, 00:31:58.685 "memory_domains": [ 00:31:58.685 { 00:31:58.685 "dma_device_id": "system", 00:31:58.685 "dma_device_type": 1 00:31:58.685 }, 00:31:58.685 { 00:31:58.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:58.685 "dma_device_type": 2 00:31:58.685 }, 00:31:58.685 { 00:31:58.685 "dma_device_id": "system", 00:31:58.685 "dma_device_type": 1 00:31:58.685 }, 00:31:58.685 { 00:31:58.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:58.685 "dma_device_type": 2 00:31:58.685 } 00:31:58.685 ], 00:31:58.685 "driver_specific": { 00:31:58.685 "raid": { 00:31:58.685 "uuid": "5b21a766-0fbd-432f-94c8-8fe005ce6b39", 00:31:58.685 "strip_size_kb": 0, 00:31:58.685 "state": "online", 00:31:58.685 "raid_level": "raid1", 00:31:58.685 "superblock": true, 00:31:58.685 "num_base_bdevs": 2, 00:31:58.685 "num_base_bdevs_discovered": 2, 00:31:58.685 "num_base_bdevs_operational": 2, 00:31:58.685 "base_bdevs_list": [ 00:31:58.685 { 00:31:58.685 "name": "pt1", 00:31:58.685 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:58.685 "is_configured": true, 00:31:58.685 "data_offset": 256, 00:31:58.685 "data_size": 7936 00:31:58.685 }, 00:31:58.685 { 00:31:58.685 "name": "pt2", 00:31:58.685 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:58.685 "is_configured": true, 00:31:58.685 "data_offset": 256, 00:31:58.685 "data_size": 7936 00:31:58.685 } 00:31:58.685 ] 00:31:58.685 } 00:31:58.685 } 00:31:58.685 }' 00:31:58.685 
13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:58.685 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:31:58.685 pt2' 00:31:58.685 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:58.685 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:31:58.685 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:58.685 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:31:58.685 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:58.685 13:52:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.685 13:52:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:58.685 13:52:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.685 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:31:58.685 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:31:58.685 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:58.685 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:31:58.685 13:52:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.685 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:58.685 
13:52:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:58.685 13:52:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:58.944 [2024-11-20 13:52:01.624577] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 5b21a766-0fbd-432f-94c8-8fe005ce6b39 '!=' 5b21a766-0fbd-432f-94c8-8fe005ce6b39 ']' 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:58.944 [2024-11-20 13:52:01.676333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:31:58.944 
13:52:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:58.944 "name": "raid_bdev1", 00:31:58.944 "uuid": "5b21a766-0fbd-432f-94c8-8fe005ce6b39", 
00:31:58.944 "strip_size_kb": 0, 00:31:58.944 "state": "online", 00:31:58.944 "raid_level": "raid1", 00:31:58.944 "superblock": true, 00:31:58.944 "num_base_bdevs": 2, 00:31:58.944 "num_base_bdevs_discovered": 1, 00:31:58.944 "num_base_bdevs_operational": 1, 00:31:58.944 "base_bdevs_list": [ 00:31:58.944 { 00:31:58.944 "name": null, 00:31:58.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.944 "is_configured": false, 00:31:58.944 "data_offset": 0, 00:31:58.944 "data_size": 7936 00:31:58.944 }, 00:31:58.944 { 00:31:58.944 "name": "pt2", 00:31:58.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:58.944 "is_configured": true, 00:31:58.944 "data_offset": 256, 00:31:58.944 "data_size": 7936 00:31:58.944 } 00:31:58.944 ] 00:31:58.944 }' 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:58.944 13:52:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:59.511 [2024-11-20 13:52:02.212518] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:59.511 [2024-11-20 13:52:02.212718] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:59.511 [2024-11-20 13:52:02.212842] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:59.511 [2024-11-20 13:52:02.212942] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:59.511 [2024-11-20 13:52:02.212965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:31:59.511 13:52:02 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:31:59.511 13:52:02 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.511 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:59.511 [2024-11-20 13:52:02.288486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:59.511 [2024-11-20 13:52:02.288737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:59.511 [2024-11-20 13:52:02.288773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:59.511 [2024-11-20 13:52:02.288791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:59.511 [2024-11-20 13:52:02.291737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:59.512 [2024-11-20 13:52:02.291918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:59.512 [2024-11-20 13:52:02.292052] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:59.512 [2024-11-20 13:52:02.292121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:59.512 [2024-11-20 13:52:02.292250] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:31:59.512 [2024-11-20 13:52:02.292272] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:59.512 [2024-11-20 13:52:02.292590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:59.512 [2024-11-20 13:52:02.292781] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:31:59.512 [2024-11-20 13:52:02.292796] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:31:59.512 [2024-11-20 13:52:02.293081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:59.512 pt2 00:31:59.512 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.512 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:59.512 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:59.512 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:59.512 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:59.512 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:59.512 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:31:59.512 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:59.512 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:59.512 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:59.512 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:59.512 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:59.512 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.512 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:59.512 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:59.512 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.512 13:52:02 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:59.512 "name": "raid_bdev1", 00:31:59.512 "uuid": "5b21a766-0fbd-432f-94c8-8fe005ce6b39", 00:31:59.512 "strip_size_kb": 0, 00:31:59.512 "state": "online", 00:31:59.512 "raid_level": "raid1", 00:31:59.512 "superblock": true, 00:31:59.512 "num_base_bdevs": 2, 00:31:59.512 "num_base_bdevs_discovered": 1, 00:31:59.512 "num_base_bdevs_operational": 1, 00:31:59.512 "base_bdevs_list": [ 00:31:59.512 { 00:31:59.512 "name": null, 00:31:59.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.512 "is_configured": false, 00:31:59.512 "data_offset": 256, 00:31:59.512 "data_size": 7936 00:31:59.512 }, 00:31:59.512 { 00:31:59.512 "name": "pt2", 00:31:59.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:59.512 "is_configured": true, 00:31:59.512 "data_offset": 256, 00:31:59.512 "data_size": 7936 00:31:59.512 } 00:31:59.512 ] 00:31:59.512 }' 00:31:59.512 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:59.512 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.079 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:00.079 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.079 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.079 [2024-11-20 13:52:02.829170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:00.079 [2024-11-20 13:52:02.829206] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:00.079 [2024-11-20 13:52:02.829357] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:00.079 [2024-11-20 13:52:02.829424] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:00.079 [2024-11-20 13:52:02.829438] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:32:00.079 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.080 [2024-11-20 13:52:02.893211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:00.080 [2024-11-20 13:52:02.893292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:00.080 [2024-11-20 13:52:02.893323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:32:00.080 [2024-11-20 13:52:02.893338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:00.080 [2024-11-20 13:52:02.896394] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:00.080 [2024-11-20 13:52:02.896613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:00.080 [2024-11-20 13:52:02.896742] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:00.080 [2024-11-20 13:52:02.896806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:00.080 [2024-11-20 13:52:02.897015] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:32:00.080 [2024-11-20 13:52:02.897045] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:00.080 [2024-11-20 13:52:02.897068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:32:00.080 [2024-11-20 13:52:02.897145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:00.080 [2024-11-20 13:52:02.897280] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:32:00.080 [2024-11-20 13:52:02.897309] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:00.080 [2024-11-20 13:52:02.897669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:00.080 [2024-11-20 13:52:02.897928] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:32:00.080 [2024-11-20 13:52:02.897949] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:32:00.080 [2024-11-20 13:52:02.898201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:00.080 pt1 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:00.080 "name": "raid_bdev1", 00:32:00.080 "uuid": "5b21a766-0fbd-432f-94c8-8fe005ce6b39", 00:32:00.080 "strip_size_kb": 0, 00:32:00.080 "state": "online", 00:32:00.080 "raid_level": "raid1", 
00:32:00.080 "superblock": true, 00:32:00.080 "num_base_bdevs": 2, 00:32:00.080 "num_base_bdevs_discovered": 1, 00:32:00.080 "num_base_bdevs_operational": 1, 00:32:00.080 "base_bdevs_list": [ 00:32:00.080 { 00:32:00.080 "name": null, 00:32:00.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:00.080 "is_configured": false, 00:32:00.080 "data_offset": 256, 00:32:00.080 "data_size": 7936 00:32:00.080 }, 00:32:00.080 { 00:32:00.080 "name": "pt2", 00:32:00.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:00.080 "is_configured": true, 00:32:00.080 "data_offset": 256, 00:32:00.080 "data_size": 7936 00:32:00.080 } 00:32:00.080 ] 00:32:00.080 }' 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:00.080 13:52:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.648 13:52:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:32:00.648 13:52:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.648 13:52:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.648 13:52:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:00.648 13:52:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.648 13:52:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:32:00.648 13:52:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:00.648 13:52:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:32:00.648 13:52:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.648 13:52:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:00.648 
[2024-11-20 13:52:03.501850] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:00.648 13:52:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.648 13:52:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 5b21a766-0fbd-432f-94c8-8fe005ce6b39 '!=' 5b21a766-0fbd-432f-94c8-8fe005ce6b39 ']' 00:32:00.648 13:52:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86838 00:32:00.648 13:52:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86838 ']' 00:32:00.648 13:52:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86838 00:32:00.648 13:52:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:32:00.648 13:52:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:00.648 13:52:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86838 00:32:00.906 13:52:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:00.906 13:52:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:00.906 13:52:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86838' 00:32:00.906 killing process with pid 86838 00:32:00.906 13:52:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86838 00:32:00.906 [2024-11-20 13:52:03.582054] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:00.906 13:52:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86838 00:32:00.906 [2024-11-20 13:52:03.582206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:00.906 [2024-11-20 13:52:03.582336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:32:00.906 [2024-11-20 13:52:03.582360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:32:00.906 [2024-11-20 13:52:03.760666] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:02.276 ************************************ 00:32:02.276 END TEST raid_superblock_test_4k 00:32:02.276 ************************************ 00:32:02.276 13:52:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:32:02.276 00:32:02.276 real 0m6.786s 00:32:02.276 user 0m10.770s 00:32:02.276 sys 0m0.996s 00:32:02.276 13:52:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:02.276 13:52:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:02.276 13:52:04 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:32:02.277 13:52:04 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:32:02.277 13:52:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:32:02.277 13:52:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:02.277 13:52:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:02.277 ************************************ 00:32:02.277 START TEST raid_rebuild_test_sb_4k 00:32:02.277 ************************************ 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:32:02.277 13:52:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87172 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87172 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87172 ']' 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:02.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:02.277 13:52:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:02.277 [2024-11-20 13:52:04.948077] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:32:02.277 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:02.277 Zero copy mechanism will not be used. 
00:32:02.277 [2024-11-20 13:52:04.948530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87172 ] 00:32:02.277 [2024-11-20 13:52:05.119278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.535 [2024-11-20 13:52:05.252663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.793 [2024-11-20 13:52:05.450334] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:02.793 [2024-11-20 13:52:05.450395] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:03.051 13:52:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:03.051 13:52:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:32:03.051 13:52:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:03.051 13:52:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:32:03.051 13:52:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.051 13:52:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:03.310 BaseBdev1_malloc 00:32:03.310 13:52:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.310 13:52:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:03.310 13:52:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.310 13:52:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:03.310 [2024-11-20 13:52:06.003120] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:03.310 [2024-11-20 13:52:06.003209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:03.310 [2024-11-20 13:52:06.003240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:03.310 [2024-11-20 13:52:06.003272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:03.310 [2024-11-20 13:52:06.006201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:03.310 [2024-11-20 13:52:06.006251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:03.310 BaseBdev1 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:03.310 BaseBdev2_malloc 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:03.310 [2024-11-20 13:52:06.058360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:03.310 [2024-11-20 13:52:06.058447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:32:03.310 [2024-11-20 13:52:06.058482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:03.310 [2024-11-20 13:52:06.058498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:03.310 [2024-11-20 13:52:06.061306] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:03.310 [2024-11-20 13:52:06.061350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:03.310 BaseBdev2 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:03.310 spare_malloc 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:03.310 spare_delay 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.310 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:03.310 
[2024-11-20 13:52:06.128063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:03.310 [2024-11-20 13:52:06.128343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:03.311 [2024-11-20 13:52:06.128380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:32:03.311 [2024-11-20 13:52:06.128399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:03.311 [2024-11-20 13:52:06.131319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:03.311 [2024-11-20 13:52:06.131522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:03.311 spare 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:03.311 [2024-11-20 13:52:06.136260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:03.311 [2024-11-20 13:52:06.138671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:03.311 [2024-11-20 13:52:06.139099] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:03.311 [2024-11-20 13:52:06.139145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:03.311 [2024-11-20 13:52:06.139495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:03.311 [2024-11-20 13:52:06.139763] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:03.311 [2024-11-20 
13:52:06.139779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:03.311 [2024-11-20 13:52:06.139994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:03.311 "name": "raid_bdev1", 00:32:03.311 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:03.311 "strip_size_kb": 0, 00:32:03.311 "state": "online", 00:32:03.311 "raid_level": "raid1", 00:32:03.311 "superblock": true, 00:32:03.311 "num_base_bdevs": 2, 00:32:03.311 "num_base_bdevs_discovered": 2, 00:32:03.311 "num_base_bdevs_operational": 2, 00:32:03.311 "base_bdevs_list": [ 00:32:03.311 { 00:32:03.311 "name": "BaseBdev1", 00:32:03.311 "uuid": "7dd82c98-89c5-5f25-8dd0-8293bf6b35b6", 00:32:03.311 "is_configured": true, 00:32:03.311 "data_offset": 256, 00:32:03.311 "data_size": 7936 00:32:03.311 }, 00:32:03.311 { 00:32:03.311 "name": "BaseBdev2", 00:32:03.311 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:03.311 "is_configured": true, 00:32:03.311 "data_offset": 256, 00:32:03.311 "data_size": 7936 00:32:03.311 } 00:32:03.311 ] 00:32:03.311 }' 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:03.311 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:03.879 [2024-11-20 13:52:06.673129] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:03.879 13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:03.879 
13:52:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:04.138 [2024-11-20 13:52:07.036922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:04.138 /dev/nbd0 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:04.396 1+0 records in 00:32:04.396 1+0 records out 00:32:04.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058058 s, 7.1 MB/s 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:32:04.396 13:52:07 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:32:04.396 13:52:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:32:05.330 7936+0 records in 00:32:05.330 7936+0 records out 00:32:05.330 32505856 bytes (33 MB, 31 MiB) copied, 0.902964 s, 36.0 MB/s 00:32:05.330 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:32:05.330 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:05.330 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:05.330 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:05.330 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:32:05.330 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:05.330 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:05.588 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:05.588 
[2024-11-20 13:52:08.302343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:05.588 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:05.588 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:05.588 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:05.588 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:05.588 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:05.589 [2024-11-20 13:52:08.314448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:05.589 13:52:08 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:05.589 "name": "raid_bdev1", 00:32:05.589 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:05.589 "strip_size_kb": 0, 00:32:05.589 "state": "online", 00:32:05.589 "raid_level": "raid1", 00:32:05.589 "superblock": true, 00:32:05.589 "num_base_bdevs": 2, 00:32:05.589 "num_base_bdevs_discovered": 1, 00:32:05.589 "num_base_bdevs_operational": 1, 00:32:05.589 "base_bdevs_list": [ 00:32:05.589 { 00:32:05.589 "name": null, 00:32:05.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:05.589 "is_configured": false, 00:32:05.589 "data_offset": 0, 00:32:05.589 "data_size": 7936 00:32:05.589 }, 00:32:05.589 { 00:32:05.589 "name": "BaseBdev2", 00:32:05.589 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:05.589 "is_configured": true, 00:32:05.589 "data_offset": 256, 00:32:05.589 
"data_size": 7936 00:32:05.589 } 00:32:05.589 ] 00:32:05.589 }' 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:05.589 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:06.157 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:06.157 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.157 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:06.157 [2024-11-20 13:52:08.822679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:06.157 [2024-11-20 13:52:08.840390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:32:06.157 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.157 13:52:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:32:06.157 [2024-11-20 13:52:08.843014] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:07.089 13:52:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:07.089 13:52:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:07.089 13:52:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:07.089 13:52:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:07.089 13:52:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:07.089 13:52:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:07.089 13:52:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:32:07.089 13:52:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.089 13:52:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:07.089 13:52:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.089 13:52:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:07.089 "name": "raid_bdev1", 00:32:07.089 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:07.089 "strip_size_kb": 0, 00:32:07.089 "state": "online", 00:32:07.089 "raid_level": "raid1", 00:32:07.089 "superblock": true, 00:32:07.089 "num_base_bdevs": 2, 00:32:07.089 "num_base_bdevs_discovered": 2, 00:32:07.089 "num_base_bdevs_operational": 2, 00:32:07.089 "process": { 00:32:07.089 "type": "rebuild", 00:32:07.089 "target": "spare", 00:32:07.089 "progress": { 00:32:07.089 "blocks": 2560, 00:32:07.089 "percent": 32 00:32:07.089 } 00:32:07.089 }, 00:32:07.089 "base_bdevs_list": [ 00:32:07.089 { 00:32:07.089 "name": "spare", 00:32:07.089 "uuid": "7a04afff-297e-5290-be75-55c65a0883d5", 00:32:07.089 "is_configured": true, 00:32:07.089 "data_offset": 256, 00:32:07.089 "data_size": 7936 00:32:07.089 }, 00:32:07.089 { 00:32:07.089 "name": "BaseBdev2", 00:32:07.089 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:07.089 "is_configured": true, 00:32:07.089 "data_offset": 256, 00:32:07.089 "data_size": 7936 00:32:07.089 } 00:32:07.089 ] 00:32:07.089 }' 00:32:07.089 13:52:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:07.089 13:52:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:07.089 13:52:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:07.348 
13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:07.348 [2024-11-20 13:52:10.012415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:07.348 [2024-11-20 13:52:10.052489] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:07.348 [2024-11-20 13:52:10.052574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:07.348 [2024-11-20 13:52:10.052597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:07.348 [2024-11-20 13:52:10.052612] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:07.348 13:52:10 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:07.348 "name": "raid_bdev1", 00:32:07.348 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:07.348 "strip_size_kb": 0, 00:32:07.348 "state": "online", 00:32:07.348 "raid_level": "raid1", 00:32:07.348 "superblock": true, 00:32:07.348 "num_base_bdevs": 2, 00:32:07.348 "num_base_bdevs_discovered": 1, 00:32:07.348 "num_base_bdevs_operational": 1, 00:32:07.348 "base_bdevs_list": [ 00:32:07.348 { 00:32:07.348 "name": null, 00:32:07.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:07.348 "is_configured": false, 00:32:07.348 "data_offset": 0, 00:32:07.348 "data_size": 7936 00:32:07.348 }, 00:32:07.348 { 00:32:07.348 "name": "BaseBdev2", 00:32:07.348 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:07.348 "is_configured": true, 00:32:07.348 "data_offset": 256, 00:32:07.348 "data_size": 7936 00:32:07.348 } 00:32:07.348 ] 00:32:07.348 }' 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:07.348 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:07.914 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:07.914 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:07.914 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:07.915 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:07.915 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:07.915 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:07.915 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:07.915 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.915 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:07.915 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.915 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:07.915 "name": "raid_bdev1", 00:32:07.915 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:07.915 "strip_size_kb": 0, 00:32:07.915 "state": "online", 00:32:07.915 "raid_level": "raid1", 00:32:07.915 "superblock": true, 00:32:07.915 "num_base_bdevs": 2, 00:32:07.915 "num_base_bdevs_discovered": 1, 00:32:07.915 "num_base_bdevs_operational": 1, 00:32:07.915 "base_bdevs_list": [ 00:32:07.915 { 00:32:07.915 "name": null, 00:32:07.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:07.915 "is_configured": false, 00:32:07.915 "data_offset": 0, 00:32:07.915 "data_size": 7936 00:32:07.915 }, 00:32:07.915 { 00:32:07.915 "name": "BaseBdev2", 00:32:07.915 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:07.915 "is_configured": true, 00:32:07.915 "data_offset": 256, 00:32:07.915 "data_size": 7936 
00:32:07.915 } 00:32:07.915 ] 00:32:07.915 }' 00:32:07.915 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:07.915 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:07.915 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:07.915 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:07.915 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:07.915 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.915 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:07.915 [2024-11-20 13:52:10.798469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:07.915 [2024-11-20 13:52:10.815014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:32:07.915 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.915 13:52:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:32:07.915 [2024-11-20 13:52:10.817862] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:09.291 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:09.291 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:09.291 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:09.291 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:09.291 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:32:09.291 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:09.291 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:09.291 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.291 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:09.291 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.291 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:09.291 "name": "raid_bdev1", 00:32:09.291 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:09.291 "strip_size_kb": 0, 00:32:09.291 "state": "online", 00:32:09.291 "raid_level": "raid1", 00:32:09.291 "superblock": true, 00:32:09.291 "num_base_bdevs": 2, 00:32:09.291 "num_base_bdevs_discovered": 2, 00:32:09.291 "num_base_bdevs_operational": 2, 00:32:09.291 "process": { 00:32:09.291 "type": "rebuild", 00:32:09.291 "target": "spare", 00:32:09.291 "progress": { 00:32:09.291 "blocks": 2560, 00:32:09.291 "percent": 32 00:32:09.291 } 00:32:09.291 }, 00:32:09.291 "base_bdevs_list": [ 00:32:09.291 { 00:32:09.291 "name": "spare", 00:32:09.291 "uuid": "7a04afff-297e-5290-be75-55c65a0883d5", 00:32:09.291 "is_configured": true, 00:32:09.291 "data_offset": 256, 00:32:09.291 "data_size": 7936 00:32:09.291 }, 00:32:09.292 { 00:32:09.292 "name": "BaseBdev2", 00:32:09.292 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:09.292 "is_configured": true, 00:32:09.292 "data_offset": 256, 00:32:09.292 "data_size": 7936 00:32:09.292 } 00:32:09.292 ] 00:32:09.292 }' 00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:32:09.292 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=742 00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.292 13:52:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:09.292 13:52:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.292 13:52:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:09.292 "name": "raid_bdev1", 00:32:09.292 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:09.292 "strip_size_kb": 0, 00:32:09.292 "state": "online", 00:32:09.292 "raid_level": "raid1", 00:32:09.292 "superblock": true, 00:32:09.292 "num_base_bdevs": 2, 00:32:09.292 "num_base_bdevs_discovered": 2, 00:32:09.292 "num_base_bdevs_operational": 2, 00:32:09.292 "process": { 00:32:09.292 "type": "rebuild", 00:32:09.292 "target": "spare", 00:32:09.292 "progress": { 00:32:09.292 "blocks": 2816, 00:32:09.292 "percent": 35 00:32:09.292 } 00:32:09.292 }, 00:32:09.292 "base_bdevs_list": [ 00:32:09.292 { 00:32:09.292 "name": "spare", 00:32:09.292 "uuid": "7a04afff-297e-5290-be75-55c65a0883d5", 00:32:09.292 "is_configured": true, 00:32:09.292 "data_offset": 256, 00:32:09.292 "data_size": 7936 00:32:09.292 }, 00:32:09.292 { 00:32:09.292 "name": "BaseBdev2", 00:32:09.292 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:09.292 "is_configured": true, 00:32:09.292 "data_offset": 256, 00:32:09.292 "data_size": 7936 00:32:09.292 } 00:32:09.292 ] 00:32:09.292 }' 00:32:09.292 13:52:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:09.292 13:52:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:09.292 13:52:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:09.292 13:52:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:09.292 13:52:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:32:10.669 13:52:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:10.669 13:52:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:10.669 13:52:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:10.669 13:52:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:10.669 13:52:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:10.669 13:52:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:10.670 13:52:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:10.670 13:52:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.670 13:52:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:10.670 13:52:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:10.670 13:52:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.670 13:52:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:10.670 "name": "raid_bdev1", 00:32:10.670 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:10.670 "strip_size_kb": 0, 00:32:10.670 "state": "online", 00:32:10.670 "raid_level": "raid1", 00:32:10.670 "superblock": true, 00:32:10.670 "num_base_bdevs": 2, 00:32:10.670 "num_base_bdevs_discovered": 2, 00:32:10.670 "num_base_bdevs_operational": 2, 00:32:10.670 "process": { 00:32:10.670 "type": "rebuild", 00:32:10.670 "target": "spare", 00:32:10.670 "progress": { 00:32:10.670 "blocks": 5888, 00:32:10.670 "percent": 74 00:32:10.670 } 00:32:10.670 }, 00:32:10.670 "base_bdevs_list": [ 00:32:10.670 { 00:32:10.670 "name": "spare", 
00:32:10.670 "uuid": "7a04afff-297e-5290-be75-55c65a0883d5", 00:32:10.670 "is_configured": true, 00:32:10.670 "data_offset": 256, 00:32:10.670 "data_size": 7936 00:32:10.670 }, 00:32:10.670 { 00:32:10.670 "name": "BaseBdev2", 00:32:10.670 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:10.670 "is_configured": true, 00:32:10.670 "data_offset": 256, 00:32:10.670 "data_size": 7936 00:32:10.670 } 00:32:10.670 ] 00:32:10.670 }' 00:32:10.670 13:52:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:10.670 13:52:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:10.670 13:52:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:10.670 13:52:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:10.670 13:52:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:11.237 [2024-11-20 13:52:13.940314] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:11.237 [2024-11-20 13:52:13.940613] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:11.237 [2024-11-20 13:52:13.940805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:11.495 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:11.495 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:11.495 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:11.495 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:11.495 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:11.495 13:52:14 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:11.495 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:11.495 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:11.495 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.495 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:11.495 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.495 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:11.495 "name": "raid_bdev1", 00:32:11.495 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:11.495 "strip_size_kb": 0, 00:32:11.495 "state": "online", 00:32:11.495 "raid_level": "raid1", 00:32:11.495 "superblock": true, 00:32:11.495 "num_base_bdevs": 2, 00:32:11.495 "num_base_bdevs_discovered": 2, 00:32:11.496 "num_base_bdevs_operational": 2, 00:32:11.496 "base_bdevs_list": [ 00:32:11.496 { 00:32:11.496 "name": "spare", 00:32:11.496 "uuid": "7a04afff-297e-5290-be75-55c65a0883d5", 00:32:11.496 "is_configured": true, 00:32:11.496 "data_offset": 256, 00:32:11.496 "data_size": 7936 00:32:11.496 }, 00:32:11.496 { 00:32:11.496 "name": "BaseBdev2", 00:32:11.496 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:11.496 "is_configured": true, 00:32:11.496 "data_offset": 256, 00:32:11.496 "data_size": 7936 00:32:11.496 } 00:32:11.496 ] 00:32:11.496 }' 00:32:11.496 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:11.755 13:52:14 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:11.755 "name": "raid_bdev1", 00:32:11.755 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:11.755 "strip_size_kb": 0, 00:32:11.755 "state": "online", 00:32:11.755 "raid_level": "raid1", 00:32:11.755 "superblock": true, 00:32:11.755 "num_base_bdevs": 2, 00:32:11.755 "num_base_bdevs_discovered": 2, 00:32:11.755 "num_base_bdevs_operational": 2, 00:32:11.755 "base_bdevs_list": [ 00:32:11.755 { 00:32:11.755 "name": "spare", 00:32:11.755 "uuid": "7a04afff-297e-5290-be75-55c65a0883d5", 00:32:11.755 "is_configured": true, 00:32:11.755 "data_offset": 256, 00:32:11.755 
"data_size": 7936 00:32:11.755 }, 00:32:11.755 { 00:32:11.755 "name": "BaseBdev2", 00:32:11.755 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:11.755 "is_configured": true, 00:32:11.755 "data_offset": 256, 00:32:11.755 "data_size": 7936 00:32:11.755 } 00:32:11.755 ] 00:32:11.755 }' 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:11.755 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.015 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:12.015 "name": "raid_bdev1", 00:32:12.015 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:12.015 "strip_size_kb": 0, 00:32:12.015 "state": "online", 00:32:12.015 "raid_level": "raid1", 00:32:12.015 "superblock": true, 00:32:12.015 "num_base_bdevs": 2, 00:32:12.015 "num_base_bdevs_discovered": 2, 00:32:12.015 "num_base_bdevs_operational": 2, 00:32:12.015 "base_bdevs_list": [ 00:32:12.015 { 00:32:12.015 "name": "spare", 00:32:12.015 "uuid": "7a04afff-297e-5290-be75-55c65a0883d5", 00:32:12.015 "is_configured": true, 00:32:12.015 "data_offset": 256, 00:32:12.015 "data_size": 7936 00:32:12.015 }, 00:32:12.015 { 00:32:12.015 "name": "BaseBdev2", 00:32:12.015 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:12.015 "is_configured": true, 00:32:12.015 "data_offset": 256, 00:32:12.015 "data_size": 7936 00:32:12.015 } 00:32:12.015 ] 00:32:12.015 }' 00:32:12.015 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:12.015 13:52:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:12.288 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:12.288 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.288 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:12.288 [2024-11-20 13:52:15.171753] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:12.288 [2024-11-20 13:52:15.171801] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:12.288 [2024-11-20 13:52:15.171895] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:12.288 [2024-11-20 13:52:15.172012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:12.288 [2024-11-20 13:52:15.172034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:12.288 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.288 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:12.288 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:32:12.288 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.288 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:12.288 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.612 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:32:12.612 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:32:12.612 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:32:12.612 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:12.612 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:12.612 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:12.612 
13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:12.612 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:12.612 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:12.612 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:32:12.612 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:12.612 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:12.612 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:12.871 /dev/nbd0 00:32:12.871 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:12.871 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:12.871 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:12.871 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:32:12.871 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:12.871 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:12.871 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:12.871 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:32:12.871 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:12.871 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:12.871 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:12.871 1+0 records in 00:32:12.871 1+0 records out 00:32:12.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589153 s, 7.0 MB/s 00:32:12.871 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:12.871 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:32:12.871 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:12.871 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:12.871 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:32:12.871 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:12.871 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:12.871 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:32:13.131 /dev/nbd1 00:32:13.131 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:13.131 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:13.131 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:32:13.131 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:32:13.131 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:13.131 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:13.131 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 
00:32:13.131 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:32:13.131 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:13.131 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:13.131 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:13.131 1+0 records in 00:32:13.131 1+0 records out 00:32:13.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334774 s, 12.2 MB/s 00:32:13.131 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:13.131 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:32:13.131 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:13.131 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:13.131 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:32:13.131 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:13.131 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:13.131 13:52:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:13.131 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:32:13.131 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:13.131 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:13.131 13:52:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:13.131 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:32:13.131 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:13.131 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:13.389 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:13.390 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:13.390 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:13.390 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:13.390 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:13.390 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:13.390 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:32:13.390 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:32:13.390 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:13.390 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:32:13.649 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:13.649 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:13.649 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:13.649 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:13.649 
13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:13.649 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:13.649 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:32:13.649 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:32:13.649 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:32:13.649 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:32:13.649 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.649 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:13.908 [2024-11-20 13:52:16.567498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:13.908 [2024-11-20 13:52:16.567573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:13.908 [2024-11-20 13:52:16.567609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:13.908 [2024-11-20 13:52:16.567623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:13.908 [2024-11-20 13:52:16.570726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:13.908 [2024-11-20 13:52:16.570769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: spare 00:32:13.908 [2024-11-20 13:52:16.570916] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:13.908 [2024-11-20 13:52:16.571035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:13.908 [2024-11-20 13:52:16.571250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:13.908 spare 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:13.908 [2024-11-20 13:52:16.671387] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:32:13.908 [2024-11-20 13:52:16.671433] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:13.908 [2024-11-20 13:52:16.671823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:32:13.908 [2024-11-20 13:52:16.672160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:32:13.908 [2024-11-20 13:52:16.672190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:32:13.908 [2024-11-20 13:52:16.672497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:13.908 
13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:13.908 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:13.909 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.909 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:13.909 "name": "raid_bdev1", 00:32:13.909 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:13.909 "strip_size_kb": 0, 00:32:13.909 "state": "online", 00:32:13.909 "raid_level": "raid1", 00:32:13.909 "superblock": true, 00:32:13.909 "num_base_bdevs": 2, 00:32:13.909 "num_base_bdevs_discovered": 2, 00:32:13.909 "num_base_bdevs_operational": 2, 00:32:13.909 "base_bdevs_list": [ 00:32:13.909 { 00:32:13.909 "name": "spare", 00:32:13.909 "uuid": 
"7a04afff-297e-5290-be75-55c65a0883d5", 00:32:13.909 "is_configured": true, 00:32:13.909 "data_offset": 256, 00:32:13.909 "data_size": 7936 00:32:13.909 }, 00:32:13.909 { 00:32:13.909 "name": "BaseBdev2", 00:32:13.909 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:13.909 "is_configured": true, 00:32:13.909 "data_offset": 256, 00:32:13.909 "data_size": 7936 00:32:13.909 } 00:32:13.909 ] 00:32:13.909 }' 00:32:13.909 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:13.909 13:52:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:14.476 "name": "raid_bdev1", 00:32:14.476 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:14.476 "strip_size_kb": 0, 00:32:14.476 
"state": "online", 00:32:14.476 "raid_level": "raid1", 00:32:14.476 "superblock": true, 00:32:14.476 "num_base_bdevs": 2, 00:32:14.476 "num_base_bdevs_discovered": 2, 00:32:14.476 "num_base_bdevs_operational": 2, 00:32:14.476 "base_bdevs_list": [ 00:32:14.476 { 00:32:14.476 "name": "spare", 00:32:14.476 "uuid": "7a04afff-297e-5290-be75-55c65a0883d5", 00:32:14.476 "is_configured": true, 00:32:14.476 "data_offset": 256, 00:32:14.476 "data_size": 7936 00:32:14.476 }, 00:32:14.476 { 00:32:14.476 "name": "BaseBdev2", 00:32:14.476 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:14.476 "is_configured": true, 00:32:14.476 "data_offset": 256, 00:32:14.476 "data_size": 7936 00:32:14.476 } 00:32:14.476 ] 00:32:14.476 }' 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:14.476 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:14.734 13:52:17 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:14.734 [2024-11-20 13:52:17.408955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:14.734 
13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:14.734 "name": "raid_bdev1", 00:32:14.734 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:14.734 "strip_size_kb": 0, 00:32:14.734 "state": "online", 00:32:14.734 "raid_level": "raid1", 00:32:14.734 "superblock": true, 00:32:14.734 "num_base_bdevs": 2, 00:32:14.734 "num_base_bdevs_discovered": 1, 00:32:14.734 "num_base_bdevs_operational": 1, 00:32:14.734 "base_bdevs_list": [ 00:32:14.734 { 00:32:14.734 "name": null, 00:32:14.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:14.734 "is_configured": false, 00:32:14.734 "data_offset": 0, 00:32:14.734 "data_size": 7936 00:32:14.734 }, 00:32:14.734 { 00:32:14.734 "name": "BaseBdev2", 00:32:14.734 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:14.734 "is_configured": true, 00:32:14.734 "data_offset": 256, 00:32:14.734 "data_size": 7936 00:32:14.734 } 00:32:14.734 ] 00:32:14.734 }' 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:14.734 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:15.301 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:15.301 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.301 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:15.301 [2024-11-20 13:52:17.917120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:15.301 [2024-11-20 13:52:17.917564] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:15.301 [2024-11-20 13:52:17.917595] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: 
Re-adding bdev spare to raid bdev raid_bdev1. 00:32:15.301 [2024-11-20 13:52:17.917664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:15.301 [2024-11-20 13:52:17.934538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:32:15.301 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.301 13:52:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:32:15.301 [2024-11-20 13:52:17.937323] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:16.234 13:52:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:16.234 13:52:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:16.234 13:52:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:16.234 13:52:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:16.234 13:52:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:16.234 13:52:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:16.234 13:52:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.234 13:52:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:16.234 13:52:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:16.234 13:52:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.234 13:52:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:16.234 "name": "raid_bdev1", 00:32:16.234 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:16.234 
"strip_size_kb": 0, 00:32:16.234 "state": "online", 00:32:16.234 "raid_level": "raid1", 00:32:16.234 "superblock": true, 00:32:16.234 "num_base_bdevs": 2, 00:32:16.234 "num_base_bdevs_discovered": 2, 00:32:16.234 "num_base_bdevs_operational": 2, 00:32:16.234 "process": { 00:32:16.234 "type": "rebuild", 00:32:16.234 "target": "spare", 00:32:16.234 "progress": { 00:32:16.234 "blocks": 2560, 00:32:16.234 "percent": 32 00:32:16.234 } 00:32:16.234 }, 00:32:16.234 "base_bdevs_list": [ 00:32:16.234 { 00:32:16.234 "name": "spare", 00:32:16.234 "uuid": "7a04afff-297e-5290-be75-55c65a0883d5", 00:32:16.234 "is_configured": true, 00:32:16.234 "data_offset": 256, 00:32:16.234 "data_size": 7936 00:32:16.234 }, 00:32:16.235 { 00:32:16.235 "name": "BaseBdev2", 00:32:16.235 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:16.235 "is_configured": true, 00:32:16.235 "data_offset": 256, 00:32:16.235 "data_size": 7936 00:32:16.235 } 00:32:16.235 ] 00:32:16.235 }' 00:32:16.235 13:52:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:16.235 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:16.235 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:16.235 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:16.235 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:32:16.235 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.235 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:16.235 [2024-11-20 13:52:19.102788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:16.235 [2024-11-20 13:52:19.146020] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:32:16.235 [2024-11-20 13:52:19.146122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:16.235 [2024-11-20 13:52:19.146144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:16.235 [2024-11-20 13:52:19.146157] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:16.493 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.493 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:16.493 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:16.493 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:16.493 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:16.493 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:16.493 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:16.493 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:16.493 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:16.493 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:16.493 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:16.493 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:16.493 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.493 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:32:16.493 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:16.493 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.493 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:16.493 "name": "raid_bdev1", 00:32:16.493 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:16.493 "strip_size_kb": 0, 00:32:16.493 "state": "online", 00:32:16.493 "raid_level": "raid1", 00:32:16.493 "superblock": true, 00:32:16.493 "num_base_bdevs": 2, 00:32:16.493 "num_base_bdevs_discovered": 1, 00:32:16.493 "num_base_bdevs_operational": 1, 00:32:16.493 "base_bdevs_list": [ 00:32:16.493 { 00:32:16.493 "name": null, 00:32:16.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:16.493 "is_configured": false, 00:32:16.493 "data_offset": 0, 00:32:16.493 "data_size": 7936 00:32:16.493 }, 00:32:16.493 { 00:32:16.493 "name": "BaseBdev2", 00:32:16.493 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:16.493 "is_configured": true, 00:32:16.493 "data_offset": 256, 00:32:16.493 "data_size": 7936 00:32:16.493 } 00:32:16.493 ] 00:32:16.493 }' 00:32:16.493 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:16.493 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:16.752 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:16.752 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.752 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:16.752 [2024-11-20 13:52:19.651097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:16.752 [2024-11-20 13:52:19.651357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:16.752 [2024-11-20 
13:52:19.651399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:32:16.752 [2024-11-20 13:52:19.651418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:16.752 [2024-11-20 13:52:19.652089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:16.752 [2024-11-20 13:52:19.652131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:16.752 [2024-11-20 13:52:19.652250] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:16.752 [2024-11-20 13:52:19.652402] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:16.752 [2024-11-20 13:52:19.652434] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:16.752 [2024-11-20 13:52:19.652486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:17.010 [2024-11-20 13:52:19.668654] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:32:17.010 spare 00:32:17.010 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.010 13:52:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:32:17.010 [2024-11-20 13:52:19.671382] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:18.007 "name": "raid_bdev1", 00:32:18.007 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:18.007 "strip_size_kb": 0, 00:32:18.007 "state": "online", 00:32:18.007 "raid_level": "raid1", 00:32:18.007 "superblock": true, 00:32:18.007 "num_base_bdevs": 2, 00:32:18.007 "num_base_bdevs_discovered": 2, 00:32:18.007 "num_base_bdevs_operational": 2, 00:32:18.007 "process": { 00:32:18.007 "type": "rebuild", 00:32:18.007 "target": "spare", 00:32:18.007 "progress": { 00:32:18.007 "blocks": 2560, 00:32:18.007 "percent": 32 00:32:18.007 } 00:32:18.007 }, 00:32:18.007 "base_bdevs_list": [ 00:32:18.007 { 00:32:18.007 "name": "spare", 00:32:18.007 "uuid": "7a04afff-297e-5290-be75-55c65a0883d5", 00:32:18.007 "is_configured": true, 00:32:18.007 "data_offset": 256, 00:32:18.007 "data_size": 7936 00:32:18.007 }, 00:32:18.007 { 00:32:18.007 "name": "BaseBdev2", 00:32:18.007 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:18.007 "is_configured": true, 00:32:18.007 "data_offset": 256, 00:32:18.007 "data_size": 7936 00:32:18.007 } 00:32:18.007 ] 00:32:18.007 }' 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:18.007 13:52:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:18.007 [2024-11-20 13:52:20.828701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:18.007 [2024-11-20 13:52:20.879814] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:18.007 [2024-11-20 13:52:20.879889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:18.007 [2024-11-20 13:52:20.879937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:18.007 [2024-11-20 13:52:20.879949] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:18.007 13:52:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.007 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:18.266 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.266 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:18.266 "name": "raid_bdev1", 00:32:18.266 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:18.266 "strip_size_kb": 0, 00:32:18.266 "state": "online", 00:32:18.266 "raid_level": "raid1", 00:32:18.266 "superblock": true, 00:32:18.266 "num_base_bdevs": 2, 00:32:18.266 "num_base_bdevs_discovered": 1, 00:32:18.266 "num_base_bdevs_operational": 1, 00:32:18.266 "base_bdevs_list": [ 00:32:18.266 { 00:32:18.266 "name": null, 00:32:18.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:18.266 "is_configured": false, 00:32:18.266 "data_offset": 0, 00:32:18.266 "data_size": 7936 00:32:18.266 }, 00:32:18.266 { 00:32:18.266 "name": "BaseBdev2", 00:32:18.266 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:18.266 "is_configured": true, 00:32:18.266 "data_offset": 256, 00:32:18.266 
"data_size": 7936 00:32:18.266 } 00:32:18.266 ] 00:32:18.266 }' 00:32:18.266 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:18.266 13:52:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:18.524 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:18.524 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:18.524 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:18.524 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:18.524 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:18.524 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:18.524 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:18.524 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.524 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:18.783 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.783 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:18.783 "name": "raid_bdev1", 00:32:18.783 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:18.783 "strip_size_kb": 0, 00:32:18.783 "state": "online", 00:32:18.783 "raid_level": "raid1", 00:32:18.783 "superblock": true, 00:32:18.783 "num_base_bdevs": 2, 00:32:18.783 "num_base_bdevs_discovered": 1, 00:32:18.783 "num_base_bdevs_operational": 1, 00:32:18.783 "base_bdevs_list": [ 00:32:18.783 { 00:32:18.783 "name": null, 00:32:18.783 "uuid": "00000000-0000-0000-0000-000000000000", 
00:32:18.783 "is_configured": false, 00:32:18.783 "data_offset": 0, 00:32:18.783 "data_size": 7936 00:32:18.783 }, 00:32:18.783 { 00:32:18.783 "name": "BaseBdev2", 00:32:18.783 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:18.783 "is_configured": true, 00:32:18.783 "data_offset": 256, 00:32:18.783 "data_size": 7936 00:32:18.783 } 00:32:18.783 ] 00:32:18.783 }' 00:32:18.783 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:18.783 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:18.783 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:18.783 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:18.783 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:32:18.783 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.783 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:18.783 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.783 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:18.783 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.783 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:18.783 [2024-11-20 13:52:21.590885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:18.783 [2024-11-20 13:52:21.590968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:18.783 [2024-11-20 13:52:21.591010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000b180 00:32:18.783 [2024-11-20 13:52:21.591038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:18.783 [2024-11-20 13:52:21.591587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:18.783 [2024-11-20 13:52:21.591645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:18.783 [2024-11-20 13:52:21.591773] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:18.783 [2024-11-20 13:52:21.591795] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:18.783 [2024-11-20 13:52:21.591811] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:18.783 [2024-11-20 13:52:21.591824] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:32:18.783 BaseBdev1 00:32:18.783 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.783 13:52:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:32:19.718 13:52:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:19.718 13:52:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:19.718 13:52:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:19.718 13:52:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:19.718 13:52:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:19.718 13:52:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:19.718 13:52:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:32:19.718 13:52:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:19.718 13:52:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:19.718 13:52:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:19.718 13:52:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:19.718 13:52:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:19.718 13:52:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.718 13:52:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:19.718 13:52:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:19.976 13:52:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:19.976 "name": "raid_bdev1", 00:32:19.976 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:19.976 "strip_size_kb": 0, 00:32:19.976 "state": "online", 00:32:19.976 "raid_level": "raid1", 00:32:19.976 "superblock": true, 00:32:19.976 "num_base_bdevs": 2, 00:32:19.976 "num_base_bdevs_discovered": 1, 00:32:19.976 "num_base_bdevs_operational": 1, 00:32:19.976 "base_bdevs_list": [ 00:32:19.976 { 00:32:19.976 "name": null, 00:32:19.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.976 "is_configured": false, 00:32:19.976 "data_offset": 0, 00:32:19.976 "data_size": 7936 00:32:19.976 }, 00:32:19.976 { 00:32:19.976 "name": "BaseBdev2", 00:32:19.976 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:19.976 "is_configured": true, 00:32:19.976 "data_offset": 256, 00:32:19.976 "data_size": 7936 00:32:19.976 } 00:32:19.976 ] 00:32:19.976 }' 00:32:19.976 13:52:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:19.976 13:52:22 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:20.235 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:20.235 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:20.235 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:20.235 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:20.235 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:20.235 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:20.235 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.235 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:20.235 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:20.235 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.494 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:20.494 "name": "raid_bdev1", 00:32:20.494 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:20.494 "strip_size_kb": 0, 00:32:20.494 "state": "online", 00:32:20.494 "raid_level": "raid1", 00:32:20.494 "superblock": true, 00:32:20.494 "num_base_bdevs": 2, 00:32:20.494 "num_base_bdevs_discovered": 1, 00:32:20.494 "num_base_bdevs_operational": 1, 00:32:20.494 "base_bdevs_list": [ 00:32:20.494 { 00:32:20.494 "name": null, 00:32:20.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.494 "is_configured": false, 00:32:20.494 "data_offset": 0, 00:32:20.494 "data_size": 7936 00:32:20.494 }, 00:32:20.494 { 00:32:20.494 "name": "BaseBdev2", 00:32:20.494 "uuid": 
"9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:20.494 "is_configured": true, 00:32:20.494 "data_offset": 256, 00:32:20.494 "data_size": 7936 00:32:20.494 } 00:32:20.494 ] 00:32:20.494 }' 00:32:20.494 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:20.494 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:20.494 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:20.494 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:20.494 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:20.494 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:32:20.494 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:20.495 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:20.495 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:20.495 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:20.495 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:20.495 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:20.495 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.495 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:20.495 [2024-11-20 13:52:23.271491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:32:20.495 [2024-11-20 13:52:23.271710] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:20.495 [2024-11-20 13:52:23.271734] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:20.495 request: 00:32:20.495 { 00:32:20.495 "base_bdev": "BaseBdev1", 00:32:20.495 "raid_bdev": "raid_bdev1", 00:32:20.495 "method": "bdev_raid_add_base_bdev", 00:32:20.495 "req_id": 1 00:32:20.495 } 00:32:20.495 Got JSON-RPC error response 00:32:20.495 response: 00:32:20.495 { 00:32:20.495 "code": -22, 00:32:20.495 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:20.495 } 00:32:20.495 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:20.495 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:32:20.495 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:20.495 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:20.495 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:20.495 13:52:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:32:21.430 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:21.430 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:21.430 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:21.430 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:21.430 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:21.430 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:21.430 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:21.430 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:21.430 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:21.430 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:21.430 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:21.430 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:21.430 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.431 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:21.431 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.431 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:21.431 "name": "raid_bdev1", 00:32:21.431 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:21.431 "strip_size_kb": 0, 00:32:21.431 "state": "online", 00:32:21.431 "raid_level": "raid1", 00:32:21.431 "superblock": true, 00:32:21.431 "num_base_bdevs": 2, 00:32:21.431 "num_base_bdevs_discovered": 1, 00:32:21.431 "num_base_bdevs_operational": 1, 00:32:21.431 "base_bdevs_list": [ 00:32:21.431 { 00:32:21.431 "name": null, 00:32:21.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.431 "is_configured": false, 00:32:21.431 "data_offset": 0, 00:32:21.431 "data_size": 7936 00:32:21.431 }, 00:32:21.431 { 00:32:21.431 "name": "BaseBdev2", 00:32:21.431 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:21.431 "is_configured": true, 00:32:21.431 "data_offset": 256, 00:32:21.431 "data_size": 7936 00:32:21.431 } 
00:32:21.431 ] 00:32:21.431 }' 00:32:21.431 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:21.431 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:21.997 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:21.997 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:21.997 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:21.997 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:21.997 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:21.997 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:21.997 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:21.997 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.997 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:21.997 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.997 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:21.997 "name": "raid_bdev1", 00:32:21.997 "uuid": "6248259e-3385-46e4-8d77-378e0e50ee21", 00:32:21.997 "strip_size_kb": 0, 00:32:21.997 "state": "online", 00:32:21.997 "raid_level": "raid1", 00:32:21.997 "superblock": true, 00:32:21.997 "num_base_bdevs": 2, 00:32:21.997 "num_base_bdevs_discovered": 1, 00:32:21.997 "num_base_bdevs_operational": 1, 00:32:21.997 "base_bdevs_list": [ 00:32:21.997 { 00:32:21.997 "name": null, 00:32:21.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.997 "is_configured": false, 
00:32:21.997 "data_offset": 0, 00:32:21.997 "data_size": 7936 00:32:21.997 }, 00:32:21.997 { 00:32:21.997 "name": "BaseBdev2", 00:32:21.997 "uuid": "9a310089-3abd-5270-b85f-e4d582ad25da", 00:32:21.997 "is_configured": true, 00:32:21.997 "data_offset": 256, 00:32:21.997 "data_size": 7936 00:32:21.997 } 00:32:21.997 ] 00:32:21.997 }' 00:32:21.997 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:21.997 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:21.997 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:22.255 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:22.255 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87172 00:32:22.255 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87172 ']' 00:32:22.255 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87172 00:32:22.255 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:32:22.255 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:22.255 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87172 00:32:22.255 killing process with pid 87172 00:32:22.255 Received shutdown signal, test time was about 60.000000 seconds 00:32:22.255 00:32:22.255 Latency(us) 00:32:22.255 [2024-11-20T13:52:25.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:22.255 [2024-11-20T13:52:25.172Z] =================================================================================================================== 00:32:22.255 [2024-11-20T13:52:25.172Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:22.255 
13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:22.255 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:22.255 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87172' 00:32:22.255 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87172 00:32:22.255 [2024-11-20 13:52:24.977423] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:22.255 13:52:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87172 00:32:22.255 [2024-11-20 13:52:24.977561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:22.255 [2024-11-20 13:52:24.977623] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:22.255 [2024-11-20 13:52:24.977640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:32:22.514 [2024-11-20 13:52:25.210976] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:23.449 13:52:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:32:23.449 00:32:23.449 real 0m21.351s 00:32:23.449 user 0m29.008s 00:32:23.449 sys 0m2.424s 00:32:23.449 13:52:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:23.449 13:52:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:23.449 ************************************ 00:32:23.449 END TEST raid_rebuild_test_sb_4k 00:32:23.449 ************************************ 00:32:23.449 13:52:26 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:32:23.449 13:52:26 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:32:23.449 
13:52:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:23.449 13:52:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:23.449 13:52:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:23.449 ************************************ 00:32:23.449 START TEST raid_state_function_test_sb_md_separate 00:32:23.449 ************************************ 00:32:23.449 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:32:23.449 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:32:23.449 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:32:23.449 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:32:23.449 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:23.449 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:23.449 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:23.449 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:23.449 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:23.449 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:23.449 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:23.449 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:23.449 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:32:23.449 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:23.449 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:23.450 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:23.450 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:23.450 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:23.450 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:23.450 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:32:23.450 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:32:23.450 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:32:23.450 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:32:23.450 Process raid pid: 87875 00:32:23.450 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87875 00:32:23.450 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:23.450 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87875' 00:32:23.450 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87875 00:32:23.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
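The `waitforlisten 87875` step above polls for the freshly started `bdev_svc` process to come up on /var/tmp/spdk.sock before issuing RPCs. A hedged, self-contained sketch of that polling pattern (path and timings here are illustrative, not SPDK's actual values): retry until the listen path appears instead of sleeping a fixed amount.

```shell
# Hedged sketch of a waitforlisten-style loop: poll with a retry budget
# until the daemon's listen path exists.
sock="/tmp/demo_listen_$$"
( sleep 0.2; : > "$sock" ) &   # stand-in for a daemon creating its socket

retries=100
while [ "$retries" -gt 0 ] && [ ! -e "$sock" ]; do
    retries=$((retries - 1))
    sleep 0.1
done

if [ -e "$sock" ]; then
    listen_state=listening
else
    listen_state="timed out"
fi
echo "$listen_state"
rm -f "$sock"
```

The real helper additionally verifies the target pid is still alive on each iteration (via `kill -0`), so a crashed daemon fails fast rather than burning the whole retry budget.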
00:32:23.450 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87875 ']' 00:32:23.450 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.450 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:23.450 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:23.450 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:23.450 13:52:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:23.450 [2024-11-20 13:52:26.358635] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:32:23.450 [2024-11-20 13:52:26.358810] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:23.708 [2024-11-20 13:52:26.530859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.967 [2024-11-20 13:52:26.652213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.967 [2024-11-20 13:52:26.856794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:23.967 [2024-11-20 13:52:26.856835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:32:24.534 13:52:27 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:24.534 [2024-11-20 13:52:27.395899] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:24.534 [2024-11-20 13:52:27.395991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:24.534 [2024-11-20 13:52:27.396024] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:24.534 [2024-11-20 13:52:27.396053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:24.534 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.792 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:24.792 "name": "Existed_Raid", 00:32:24.792 "uuid": "3f07b2d3-b16a-4da7-a5dd-1a110bff6b21", 00:32:24.792 "strip_size_kb": 0, 00:32:24.792 "state": "configuring", 00:32:24.792 "raid_level": "raid1", 00:32:24.792 "superblock": true, 00:32:24.792 "num_base_bdevs": 2, 00:32:24.792 "num_base_bdevs_discovered": 0, 00:32:24.792 "num_base_bdevs_operational": 2, 00:32:24.792 "base_bdevs_list": [ 00:32:24.792 { 00:32:24.792 "name": "BaseBdev1", 00:32:24.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:24.792 "is_configured": false, 00:32:24.792 "data_offset": 0, 00:32:24.792 "data_size": 0 00:32:24.792 }, 00:32:24.792 { 00:32:24.792 "name": "BaseBdev2", 00:32:24.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:24.792 "is_configured": false, 00:32:24.792 "data_offset": 0, 00:32:24.792 "data_size": 0 00:32:24.792 } 00:32:24.792 ] 
00:32:24.792 }' 00:32:24.792 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:24.792 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:25.051 [2024-11-20 13:52:27.895949] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:25.051 [2024-11-20 13:52:27.895999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:25.051 [2024-11-20 13:52:27.907970] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:25.051 [2024-11-20 13:52:27.908205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:25.051 [2024-11-20 13:52:27.908228] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:25.051 [2024-11-20 13:52:27.908248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:25.051 
13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:25.051 [2024-11-20 13:52:27.958629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:25.051 BaseBdev1 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.051 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:25.310 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.310 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:25.310 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.310 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:25.310 [ 00:32:25.310 { 00:32:25.310 "name": "BaseBdev1", 00:32:25.310 "aliases": [ 00:32:25.310 "11111a05-64ea-491c-9a02-9e6818134eff" 00:32:25.310 ], 00:32:25.310 "product_name": "Malloc disk", 00:32:25.310 "block_size": 4096, 00:32:25.310 "num_blocks": 8192, 00:32:25.310 "uuid": "11111a05-64ea-491c-9a02-9e6818134eff", 00:32:25.310 "md_size": 32, 00:32:25.310 "md_interleave": false, 00:32:25.310 "dif_type": 0, 00:32:25.310 "assigned_rate_limits": { 00:32:25.310 "rw_ios_per_sec": 0, 00:32:25.310 "rw_mbytes_per_sec": 0, 00:32:25.310 "r_mbytes_per_sec": 0, 00:32:25.310 "w_mbytes_per_sec": 0 00:32:25.310 }, 00:32:25.310 "claimed": true, 00:32:25.310 "claim_type": "exclusive_write", 00:32:25.310 "zoned": false, 00:32:25.310 "supported_io_types": { 00:32:25.310 "read": true, 00:32:25.310 "write": true, 00:32:25.310 "unmap": true, 00:32:25.310 "flush": true, 00:32:25.310 "reset": true, 00:32:25.310 "nvme_admin": false, 00:32:25.310 "nvme_io": false, 00:32:25.310 "nvme_io_md": false, 00:32:25.310 "write_zeroes": true, 00:32:25.310 "zcopy": true, 00:32:25.310 "get_zone_info": false, 00:32:25.310 "zone_management": false, 00:32:25.310 "zone_append": false, 00:32:25.310 "compare": false, 00:32:25.310 "compare_and_write": false, 00:32:25.310 "abort": true, 00:32:25.310 "seek_hole": false, 00:32:25.310 "seek_data": false, 00:32:25.310 "copy": true, 00:32:25.310 "nvme_iov_md": false 00:32:25.310 }, 00:32:25.310 "memory_domains": [ 00:32:25.310 { 00:32:25.310 "dma_device_id": "system", 00:32:25.310 "dma_device_type": 1 00:32:25.310 }, 
00:32:25.310 { 00:32:25.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:25.310 "dma_device_type": 2 00:32:25.310 } 00:32:25.310 ], 00:32:25.310 "driver_specific": {} 00:32:25.311 } 00:32:25.311 ] 00:32:25.311 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.311 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:32:25.311 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:25.311 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:25.311 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:25.311 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:25.311 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:25.311 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:25.311 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:25.311 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:25.311 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:25.311 13:52:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:25.311 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:25.311 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:32:25.311 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.311 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:25.311 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.311 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:25.311 "name": "Existed_Raid", 00:32:25.311 "uuid": "11e90c72-7032-4686-b28a-ad5e5f9753a4", 00:32:25.311 "strip_size_kb": 0, 00:32:25.311 "state": "configuring", 00:32:25.311 "raid_level": "raid1", 00:32:25.311 "superblock": true, 00:32:25.311 "num_base_bdevs": 2, 00:32:25.311 "num_base_bdevs_discovered": 1, 00:32:25.311 "num_base_bdevs_operational": 2, 00:32:25.311 "base_bdevs_list": [ 00:32:25.311 { 00:32:25.311 "name": "BaseBdev1", 00:32:25.311 "uuid": "11111a05-64ea-491c-9a02-9e6818134eff", 00:32:25.311 "is_configured": true, 00:32:25.311 "data_offset": 256, 00:32:25.311 "data_size": 7936 00:32:25.311 }, 00:32:25.311 { 00:32:25.311 "name": "BaseBdev2", 00:32:25.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.311 "is_configured": false, 00:32:25.311 "data_offset": 0, 00:32:25.311 "data_size": 0 00:32:25.311 } 00:32:25.311 ] 00:32:25.311 }' 00:32:25.311 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:25.311 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:32:25.880 [2024-11-20 13:52:28.550845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:25.880 [2024-11-20 13:52:28.550916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:25.880 [2024-11-20 13:52:28.558886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:25.880 [2024-11-20 13:52:28.561196] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:25.880 [2024-11-20 13:52:28.561260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:25.880 "name": "Existed_Raid", 00:32:25.880 "uuid": "b7d8030b-5d0b-48c4-967d-6a8a2dbed140", 00:32:25.880 "strip_size_kb": 0, 00:32:25.880 "state": "configuring", 00:32:25.880 "raid_level": "raid1", 00:32:25.880 "superblock": true, 00:32:25.880 "num_base_bdevs": 2, 00:32:25.880 "num_base_bdevs_discovered": 1, 00:32:25.880 
"num_base_bdevs_operational": 2, 00:32:25.880 "base_bdevs_list": [ 00:32:25.880 { 00:32:25.880 "name": "BaseBdev1", 00:32:25.880 "uuid": "11111a05-64ea-491c-9a02-9e6818134eff", 00:32:25.880 "is_configured": true, 00:32:25.880 "data_offset": 256, 00:32:25.880 "data_size": 7936 00:32:25.880 }, 00:32:25.880 { 00:32:25.880 "name": "BaseBdev2", 00:32:25.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:25.880 "is_configured": false, 00:32:25.880 "data_offset": 0, 00:32:25.880 "data_size": 0 00:32:25.880 } 00:32:25.880 ] 00:32:25.880 }' 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:25.880 13:52:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:26.448 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:32:26.448 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.448 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:26.448 [2024-11-20 13:52:29.133704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:26.448 [2024-11-20 13:52:29.134251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:26.448 [2024-11-20 13:52:29.134281] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:26.448 [2024-11-20 13:52:29.134387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:26.448 BaseBdev2 00:32:26.448 [2024-11-20 13:52:29.134553] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:26.448 [2024-11-20 13:52:29.134584] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:32:26.448 [2024-11-20 13:52:29.134698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:26.448 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.448 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:32:26.448 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:32:26.448 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:26.448 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:32:26.448 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:26.448 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:26.448 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:26.448 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.448 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:26.448 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.448 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:26.448 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.448 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:26.448 [ 00:32:26.448 { 00:32:26.449 "name": "BaseBdev2", 00:32:26.449 "aliases": [ 00:32:26.449 
"cd1ec9f1-3579-40bd-819f-72bd3b59e220" 00:32:26.449 ], 00:32:26.449 "product_name": "Malloc disk", 00:32:26.449 "block_size": 4096, 00:32:26.449 "num_blocks": 8192, 00:32:26.449 "uuid": "cd1ec9f1-3579-40bd-819f-72bd3b59e220", 00:32:26.449 "md_size": 32, 00:32:26.449 "md_interleave": false, 00:32:26.449 "dif_type": 0, 00:32:26.449 "assigned_rate_limits": { 00:32:26.449 "rw_ios_per_sec": 0, 00:32:26.449 "rw_mbytes_per_sec": 0, 00:32:26.449 "r_mbytes_per_sec": 0, 00:32:26.449 "w_mbytes_per_sec": 0 00:32:26.449 }, 00:32:26.449 "claimed": true, 00:32:26.449 "claim_type": "exclusive_write", 00:32:26.449 "zoned": false, 00:32:26.449 "supported_io_types": { 00:32:26.449 "read": true, 00:32:26.449 "write": true, 00:32:26.449 "unmap": true, 00:32:26.449 "flush": true, 00:32:26.449 "reset": true, 00:32:26.449 "nvme_admin": false, 00:32:26.449 "nvme_io": false, 00:32:26.449 "nvme_io_md": false, 00:32:26.449 "write_zeroes": true, 00:32:26.449 "zcopy": true, 00:32:26.449 "get_zone_info": false, 00:32:26.449 "zone_management": false, 00:32:26.449 "zone_append": false, 00:32:26.449 "compare": false, 00:32:26.449 "compare_and_write": false, 00:32:26.449 "abort": true, 00:32:26.449 "seek_hole": false, 00:32:26.449 "seek_data": false, 00:32:26.449 "copy": true, 00:32:26.449 "nvme_iov_md": false 00:32:26.449 }, 00:32:26.449 "memory_domains": [ 00:32:26.449 { 00:32:26.449 "dma_device_id": "system", 00:32:26.449 "dma_device_type": 1 00:32:26.449 }, 00:32:26.449 { 00:32:26.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:26.449 "dma_device_type": 2 00:32:26.449 } 00:32:26.449 ], 00:32:26.449 "driver_specific": {} 00:32:26.449 } 00:32:26.449 ] 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:26.449 13:52:29 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:26.449 "name": "Existed_Raid", 00:32:26.449 "uuid": "b7d8030b-5d0b-48c4-967d-6a8a2dbed140", 00:32:26.449 "strip_size_kb": 0, 00:32:26.449 "state": "online", 00:32:26.449 "raid_level": "raid1", 00:32:26.449 "superblock": true, 00:32:26.449 "num_base_bdevs": 2, 00:32:26.449 "num_base_bdevs_discovered": 2, 00:32:26.449 "num_base_bdevs_operational": 2, 00:32:26.449 "base_bdevs_list": [ 00:32:26.449 { 00:32:26.449 "name": "BaseBdev1", 00:32:26.449 "uuid": "11111a05-64ea-491c-9a02-9e6818134eff", 00:32:26.449 "is_configured": true, 00:32:26.449 "data_offset": 256, 00:32:26.449 "data_size": 7936 00:32:26.449 }, 00:32:26.449 { 00:32:26.449 "name": "BaseBdev2", 00:32:26.449 "uuid": "cd1ec9f1-3579-40bd-819f-72bd3b59e220", 00:32:26.449 "is_configured": true, 00:32:26.449 "data_offset": 256, 00:32:26.449 "data_size": 7936 00:32:26.449 } 00:32:26.449 ] 00:32:26.449 }' 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:26.449 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:27.018 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:27.018 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:27.018 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:27.018 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:27.018 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:32:27.018 13:52:29 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:27.018 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:27.018 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:27.018 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.018 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:27.018 [2024-11-20 13:52:29.706415] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:27.018 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.018 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:27.018 "name": "Existed_Raid", 00:32:27.018 "aliases": [ 00:32:27.018 "b7d8030b-5d0b-48c4-967d-6a8a2dbed140" 00:32:27.018 ], 00:32:27.018 "product_name": "Raid Volume", 00:32:27.018 "block_size": 4096, 00:32:27.018 "num_blocks": 7936, 00:32:27.018 "uuid": "b7d8030b-5d0b-48c4-967d-6a8a2dbed140", 00:32:27.018 "md_size": 32, 00:32:27.018 "md_interleave": false, 00:32:27.018 "dif_type": 0, 00:32:27.018 "assigned_rate_limits": { 00:32:27.018 "rw_ios_per_sec": 0, 00:32:27.018 "rw_mbytes_per_sec": 0, 00:32:27.018 "r_mbytes_per_sec": 0, 00:32:27.018 "w_mbytes_per_sec": 0 00:32:27.018 }, 00:32:27.018 "claimed": false, 00:32:27.018 "zoned": false, 00:32:27.018 "supported_io_types": { 00:32:27.018 "read": true, 00:32:27.018 "write": true, 00:32:27.018 "unmap": false, 00:32:27.018 "flush": false, 00:32:27.018 "reset": true, 00:32:27.018 "nvme_admin": false, 00:32:27.018 "nvme_io": false, 00:32:27.018 "nvme_io_md": false, 00:32:27.018 "write_zeroes": true, 00:32:27.018 "zcopy": false, 00:32:27.018 "get_zone_info": 
false, 00:32:27.018 "zone_management": false, 00:32:27.018 "zone_append": false, 00:32:27.018 "compare": false, 00:32:27.018 "compare_and_write": false, 00:32:27.018 "abort": false, 00:32:27.018 "seek_hole": false, 00:32:27.018 "seek_data": false, 00:32:27.018 "copy": false, 00:32:27.018 "nvme_iov_md": false 00:32:27.019 }, 00:32:27.019 "memory_domains": [ 00:32:27.019 { 00:32:27.019 "dma_device_id": "system", 00:32:27.019 "dma_device_type": 1 00:32:27.019 }, 00:32:27.019 { 00:32:27.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.019 "dma_device_type": 2 00:32:27.019 }, 00:32:27.019 { 00:32:27.019 "dma_device_id": "system", 00:32:27.019 "dma_device_type": 1 00:32:27.019 }, 00:32:27.019 { 00:32:27.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.019 "dma_device_type": 2 00:32:27.019 } 00:32:27.019 ], 00:32:27.019 "driver_specific": { 00:32:27.019 "raid": { 00:32:27.019 "uuid": "b7d8030b-5d0b-48c4-967d-6a8a2dbed140", 00:32:27.019 "strip_size_kb": 0, 00:32:27.019 "state": "online", 00:32:27.019 "raid_level": "raid1", 00:32:27.019 "superblock": true, 00:32:27.019 "num_base_bdevs": 2, 00:32:27.019 "num_base_bdevs_discovered": 2, 00:32:27.019 "num_base_bdevs_operational": 2, 00:32:27.019 "base_bdevs_list": [ 00:32:27.019 { 00:32:27.019 "name": "BaseBdev1", 00:32:27.019 "uuid": "11111a05-64ea-491c-9a02-9e6818134eff", 00:32:27.019 "is_configured": true, 00:32:27.019 "data_offset": 256, 00:32:27.019 "data_size": 7936 00:32:27.019 }, 00:32:27.019 { 00:32:27.019 "name": "BaseBdev2", 00:32:27.019 "uuid": "cd1ec9f1-3579-40bd-819f-72bd3b59e220", 00:32:27.019 "is_configured": true, 00:32:27.019 "data_offset": 256, 00:32:27.019 "data_size": 7936 00:32:27.019 } 00:32:27.019 ] 00:32:27.019 } 00:32:27.019 } 00:32:27.019 }' 00:32:27.019 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:27.019 13:52:29 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:27.019 BaseBdev2' 00:32:27.019 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:27.019 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:32:27.019 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:27.019 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:27.019 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:27.019 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.019 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:27.019 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.019 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:32:27.019 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:32:27.019 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:27.019 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:27.019 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.019 13:52:29 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:27.019 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:27.278 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.279 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:32:27.279 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:32:27.279 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:27.279 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.279 13:52:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:27.279 [2024-11-20 13:52:29.970056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:27.279 "name": "Existed_Raid", 
00:32:27.279 "uuid": "b7d8030b-5d0b-48c4-967d-6a8a2dbed140", 00:32:27.279 "strip_size_kb": 0, 00:32:27.279 "state": "online", 00:32:27.279 "raid_level": "raid1", 00:32:27.279 "superblock": true, 00:32:27.279 "num_base_bdevs": 2, 00:32:27.279 "num_base_bdevs_discovered": 1, 00:32:27.279 "num_base_bdevs_operational": 1, 00:32:27.279 "base_bdevs_list": [ 00:32:27.279 { 00:32:27.279 "name": null, 00:32:27.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.279 "is_configured": false, 00:32:27.279 "data_offset": 0, 00:32:27.279 "data_size": 7936 00:32:27.279 }, 00:32:27.279 { 00:32:27.279 "name": "BaseBdev2", 00:32:27.279 "uuid": "cd1ec9f1-3579-40bd-819f-72bd3b59e220", 00:32:27.279 "is_configured": true, 00:32:27.279 "data_offset": 256, 00:32:27.279 "data_size": 7936 00:32:27.279 } 00:32:27.279 ] 00:32:27.279 }' 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:27.279 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:27.847 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:27.847 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:27.847 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:27.847 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:27.847 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.847 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:27.847 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.847 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:27.847 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:27.847 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:27.847 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.847 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:27.847 [2024-11-20 13:52:30.663496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:27.847 [2024-11-20 13:52:30.663616] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:27.847 [2024-11-20 13:52:30.749355] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:27.847 [2024-11-20 13:52:30.749653] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:27.847 [2024-11-20 13:52:30.749817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:27.847 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.847 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:27.847 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:27.847 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:27.847 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:27.847 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:27.847 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:28.106 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.106 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:28.106 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:28.106 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:32:28.106 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87875 00:32:28.106 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87875 ']' 00:32:28.106 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87875 00:32:28.106 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:32:28.106 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:28.106 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87875 00:32:28.106 killing process with pid 87875 00:32:28.106 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:28.106 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:28.106 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87875' 00:32:28.106 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87875 00:32:28.106 [2024-11-20 13:52:30.841998] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:28.106 13:52:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87875 00:32:28.106 [2024-11-20 13:52:30.856580] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:29.043 ************************************ 00:32:29.043 END TEST raid_state_function_test_sb_md_separate 00:32:29.043 ************************************ 00:32:29.043 13:52:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:32:29.043 00:32:29.043 real 0m5.600s 00:32:29.043 user 0m8.513s 00:32:29.043 sys 0m0.812s 00:32:29.043 13:52:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:29.043 13:52:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:29.043 13:52:31 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:32:29.043 13:52:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:29.043 13:52:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:29.043 13:52:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:29.043 ************************************ 00:32:29.043 START TEST raid_superblock_test_md_separate 00:32:29.043 ************************************ 00:32:29.043 13:52:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88128 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88128 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88128 ']' 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.044 13:52:31 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:29.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:29.044 13:52:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:29.354 [2024-11-20 13:52:32.033361] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:32:29.354 [2024-11-20 13:52:32.033545] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88128 ] 00:32:29.354 [2024-11-20 13:52:32.221332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.614 [2024-11-20 13:52:32.378654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.880 [2024-11-20 13:52:32.577133] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:29.880 [2024-11-20 13:52:32.577206] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:30.139 13:52:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:30.139 13:52:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:32:30.139 13:52:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:32:30.139 13:52:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:30.139 13:52:32 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:32:30.139 13:52:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:32:30.139 13:52:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:30.139 13:52:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:30.139 13:52:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:30.139 13:52:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:30.139 13:52:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:32:30.139 13:52:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.139 13:52:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.139 malloc1 00:32:30.139 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.139 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:30.139 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.139 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.139 [2024-11-20 13:52:33.046631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:30.139 [2024-11-20 13:52:33.047029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:30.139 [2024-11-20 13:52:33.047112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000007280 00:32:30.139 [2024-11-20 13:52:33.047374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:30.139 [2024-11-20 13:52:33.049896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:30.139 [2024-11-20 13:52:33.050118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:30.139 pt1 00:32:30.139 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.399 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:30.399 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:30.399 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:32:30.399 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:32:30.399 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:30.399 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:30.399 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:30.399 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:30.399 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:32:30.399 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.399 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.399 malloc2 00:32:30.399 13:52:33 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.399 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:30.399 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.399 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.400 [2024-11-20 13:52:33.105401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:30.400 [2024-11-20 13:52:33.105692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:30.400 [2024-11-20 13:52:33.105800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:30.400 [2024-11-20 13:52:33.105936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:30.400 [2024-11-20 13:52:33.108590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:30.400 [2024-11-20 13:52:33.108788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:30.400 pt2 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.400 [2024-11-20 13:52:33.117545] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:30.400 [2024-11-20 13:52:33.120111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:30.400 [2024-11-20 13:52:33.120353] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:30.400 [2024-11-20 13:52:33.120373] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:30.400 [2024-11-20 13:52:33.120458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:30.400 [2024-11-20 13:52:33.120631] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:30.400 [2024-11-20 13:52:33.120650] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:30.400 [2024-11-20 13:52:33.120780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:30.400 13:52:33 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:30.400 "name": "raid_bdev1", 00:32:30.400 "uuid": "eb026f84-25e7-4766-8507-a4ee9eda3581", 00:32:30.400 "strip_size_kb": 0, 00:32:30.400 "state": "online", 00:32:30.400 "raid_level": "raid1", 00:32:30.400 "superblock": true, 00:32:30.400 "num_base_bdevs": 2, 00:32:30.400 "num_base_bdevs_discovered": 2, 00:32:30.400 "num_base_bdevs_operational": 2, 00:32:30.400 "base_bdevs_list": [ 00:32:30.400 { 00:32:30.400 "name": "pt1", 00:32:30.400 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:30.400 "is_configured": true, 00:32:30.400 "data_offset": 256, 00:32:30.400 "data_size": 7936 00:32:30.400 }, 00:32:30.400 { 00:32:30.400 "name": "pt2", 00:32:30.400 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:30.400 "is_configured": true, 00:32:30.400 "data_offset": 256, 00:32:30.400 "data_size": 7936 00:32:30.400 } 00:32:30.400 ] 00:32:30.400 }' 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:32:30.400 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.968 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:32:30.968 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:30.968 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:30.968 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:30.968 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:32:30.968 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:30.968 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:30.968 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:30.968 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.968 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.968 [2024-11-20 13:52:33.662406] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:30.968 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.968 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:30.968 "name": "raid_bdev1", 00:32:30.968 "aliases": [ 00:32:30.968 "eb026f84-25e7-4766-8507-a4ee9eda3581" 00:32:30.968 ], 00:32:30.968 "product_name": "Raid Volume", 00:32:30.968 "block_size": 4096, 00:32:30.968 "num_blocks": 7936, 00:32:30.968 "uuid": "eb026f84-25e7-4766-8507-a4ee9eda3581", 00:32:30.968 "md_size": 32, 
00:32:30.968 "md_interleave": false, 00:32:30.968 "dif_type": 0, 00:32:30.968 "assigned_rate_limits": { 00:32:30.968 "rw_ios_per_sec": 0, 00:32:30.968 "rw_mbytes_per_sec": 0, 00:32:30.968 "r_mbytes_per_sec": 0, 00:32:30.968 "w_mbytes_per_sec": 0 00:32:30.968 }, 00:32:30.968 "claimed": false, 00:32:30.968 "zoned": false, 00:32:30.968 "supported_io_types": { 00:32:30.968 "read": true, 00:32:30.968 "write": true, 00:32:30.968 "unmap": false, 00:32:30.968 "flush": false, 00:32:30.968 "reset": true, 00:32:30.968 "nvme_admin": false, 00:32:30.968 "nvme_io": false, 00:32:30.968 "nvme_io_md": false, 00:32:30.968 "write_zeroes": true, 00:32:30.968 "zcopy": false, 00:32:30.968 "get_zone_info": false, 00:32:30.968 "zone_management": false, 00:32:30.968 "zone_append": false, 00:32:30.968 "compare": false, 00:32:30.968 "compare_and_write": false, 00:32:30.969 "abort": false, 00:32:30.969 "seek_hole": false, 00:32:30.969 "seek_data": false, 00:32:30.969 "copy": false, 00:32:30.969 "nvme_iov_md": false 00:32:30.969 }, 00:32:30.969 "memory_domains": [ 00:32:30.969 { 00:32:30.969 "dma_device_id": "system", 00:32:30.969 "dma_device_type": 1 00:32:30.969 }, 00:32:30.969 { 00:32:30.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.969 "dma_device_type": 2 00:32:30.969 }, 00:32:30.969 { 00:32:30.969 "dma_device_id": "system", 00:32:30.969 "dma_device_type": 1 00:32:30.969 }, 00:32:30.969 { 00:32:30.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.969 "dma_device_type": 2 00:32:30.969 } 00:32:30.969 ], 00:32:30.969 "driver_specific": { 00:32:30.969 "raid": { 00:32:30.969 "uuid": "eb026f84-25e7-4766-8507-a4ee9eda3581", 00:32:30.969 "strip_size_kb": 0, 00:32:30.969 "state": "online", 00:32:30.969 "raid_level": "raid1", 00:32:30.969 "superblock": true, 00:32:30.969 "num_base_bdevs": 2, 00:32:30.969 "num_base_bdevs_discovered": 2, 00:32:30.969 "num_base_bdevs_operational": 2, 00:32:30.969 "base_bdevs_list": [ 00:32:30.969 { 00:32:30.969 "name": "pt1", 00:32:30.969 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:32:30.969 "is_configured": true, 00:32:30.969 "data_offset": 256, 00:32:30.969 "data_size": 7936 00:32:30.969 }, 00:32:30.969 { 00:32:30.969 "name": "pt2", 00:32:30.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:30.969 "is_configured": true, 00:32:30.969 "data_offset": 256, 00:32:30.969 "data_size": 7936 00:32:30.969 } 00:32:30.969 ] 00:32:30.969 } 00:32:30.969 } 00:32:30.969 }' 00:32:30.969 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:30.969 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:30.969 pt2' 00:32:30.969 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:30.969 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:32:30.969 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:30.969 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:30.969 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.969 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.969 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:30.969 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.969 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:32:30.969 13:52:33 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:32:30.969 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:30.969 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:30.969 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.969 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.969 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:30.969 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.229 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:32:31.229 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:32:31.229 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:31.229 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.229 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:31.229 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:32:31.229 [2024-11-20 13:52:33.926146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:31.229 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.229 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=eb026f84-25e7-4766-8507-a4ee9eda3581 00:32:31.229 
13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z eb026f84-25e7-4766-8507-a4ee9eda3581 ']' 00:32:31.229 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:31.229 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.229 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:31.229 [2024-11-20 13:52:33.969791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:31.229 [2024-11-20 13:52:33.969836] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:31.229 [2024-11-20 13:52:33.970086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:31.229 [2024-11-20 13:52:33.970213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:31.229 [2024-11-20 13:52:33.970254] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:31.229 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.229 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.229 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.229 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:31.229 13:52:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:32:31.229 13:52:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:32:31.229 13:52:34 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false 
== true ']' 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.229 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:31.229 [2024-11-20 13:52:34.121843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:31.229 [2024-11-20 13:52:34.124692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:31.229 [2024-11-20 13:52:34.125419] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:31.229 [2024-11-20 13:52:34.125573] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 
00:32:31.229 [2024-11-20 13:52:34.125603] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:31.229 [2024-11-20 13:52:34.125621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:32:31.229 request: 00:32:31.229 { 00:32:31.229 "name": "raid_bdev1", 00:32:31.229 "raid_level": "raid1", 00:32:31.229 "base_bdevs": [ 00:32:31.229 "malloc1", 00:32:31.229 "malloc2" 00:32:31.229 ], 00:32:31.229 "superblock": false, 00:32:31.229 "method": "bdev_raid_create", 00:32:31.229 "req_id": 1 00:32:31.229 } 00:32:31.229 Got JSON-RPC error response 00:32:31.229 response: 00:32:31.229 { 00:32:31.229 "code": -17, 00:32:31.229 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:31.229 } 00:32:31.230 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:31.230 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:32:31.230 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:31.230 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:31.230 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:31.230 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.230 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.230 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:31.230 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.489 13:52:34 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:31.489 [2024-11-20 13:52:34.194102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:31.489 [2024-11-20 13:52:34.194213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:31.489 [2024-11-20 13:52:34.194244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:31.489 [2024-11-20 13:52:34.194269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:31.489 [2024-11-20 13:52:34.197427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:31.489 [2024-11-20 13:52:34.197476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:31.489 [2024-11-20 13:52:34.197555] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:31.489 [2024-11-20 13:52:34.197687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:31.489 pt1 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:31.489 
13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.489 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:31.489 "name": "raid_bdev1", 00:32:31.489 "uuid": "eb026f84-25e7-4766-8507-a4ee9eda3581", 00:32:31.489 "strip_size_kb": 0, 00:32:31.489 "state": "configuring", 00:32:31.489 "raid_level": "raid1", 00:32:31.489 "superblock": true, 00:32:31.489 "num_base_bdevs": 2, 00:32:31.489 "num_base_bdevs_discovered": 1, 00:32:31.489 
"num_base_bdevs_operational": 2, 00:32:31.489 "base_bdevs_list": [ 00:32:31.489 { 00:32:31.489 "name": "pt1", 00:32:31.489 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:31.489 "is_configured": true, 00:32:31.489 "data_offset": 256, 00:32:31.489 "data_size": 7936 00:32:31.489 }, 00:32:31.489 { 00:32:31.489 "name": null, 00:32:31.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:31.489 "is_configured": false, 00:32:31.489 "data_offset": 256, 00:32:31.489 "data_size": 7936 00:32:31.489 } 00:32:31.490 ] 00:32:31.490 }' 00:32:31.490 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:31.490 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:32.058 [2024-11-20 13:52:34.730250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:32.058 [2024-11-20 13:52:34.730386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:32.058 [2024-11-20 13:52:34.730452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:32:32.058 [2024-11-20 13:52:34.730477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:32.058 
[2024-11-20 13:52:34.730855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:32.058 [2024-11-20 13:52:34.730888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:32.058 [2024-11-20 13:52:34.731010] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:32.058 [2024-11-20 13:52:34.731050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:32.058 [2024-11-20 13:52:34.731249] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:32.058 [2024-11-20 13:52:34.731302] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:32.058 [2024-11-20 13:52:34.731424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:32.058 [2024-11-20 13:52:34.731610] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:32.058 [2024-11-20 13:52:34.731642] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:32:32.058 [2024-11-20 13:52:34.731840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:32.058 pt2 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:32.058 "name": "raid_bdev1", 00:32:32.058 "uuid": "eb026f84-25e7-4766-8507-a4ee9eda3581", 00:32:32.058 "strip_size_kb": 0, 00:32:32.058 "state": "online", 00:32:32.058 "raid_level": "raid1", 00:32:32.058 "superblock": true, 00:32:32.058 "num_base_bdevs": 2, 00:32:32.058 "num_base_bdevs_discovered": 2, 00:32:32.058 "num_base_bdevs_operational": 2, 00:32:32.058 "base_bdevs_list": [ 00:32:32.058 { 00:32:32.058 "name": 
"pt1", 00:32:32.058 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:32.058 "is_configured": true, 00:32:32.058 "data_offset": 256, 00:32:32.058 "data_size": 7936 00:32:32.058 }, 00:32:32.058 { 00:32:32.058 "name": "pt2", 00:32:32.058 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:32.058 "is_configured": true, 00:32:32.058 "data_offset": 256, 00:32:32.058 "data_size": 7936 00:32:32.058 } 00:32:32.058 ] 00:32:32.058 }' 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:32.058 13:52:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:32.624 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:32:32.624 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:32.624 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:32.624 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:32.624 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:32:32.624 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:32.624 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:32.624 13:52:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.624 13:52:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:32.624 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:32.624 [2024-11-20 13:52:35.270821] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:32.624 13:52:35 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.624 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:32.624 "name": "raid_bdev1", 00:32:32.624 "aliases": [ 00:32:32.624 "eb026f84-25e7-4766-8507-a4ee9eda3581" 00:32:32.624 ], 00:32:32.625 "product_name": "Raid Volume", 00:32:32.625 "block_size": 4096, 00:32:32.625 "num_blocks": 7936, 00:32:32.625 "uuid": "eb026f84-25e7-4766-8507-a4ee9eda3581", 00:32:32.625 "md_size": 32, 00:32:32.625 "md_interleave": false, 00:32:32.625 "dif_type": 0, 00:32:32.625 "assigned_rate_limits": { 00:32:32.625 "rw_ios_per_sec": 0, 00:32:32.625 "rw_mbytes_per_sec": 0, 00:32:32.625 "r_mbytes_per_sec": 0, 00:32:32.625 "w_mbytes_per_sec": 0 00:32:32.625 }, 00:32:32.625 "claimed": false, 00:32:32.625 "zoned": false, 00:32:32.625 "supported_io_types": { 00:32:32.625 "read": true, 00:32:32.625 "write": true, 00:32:32.625 "unmap": false, 00:32:32.625 "flush": false, 00:32:32.625 "reset": true, 00:32:32.625 "nvme_admin": false, 00:32:32.625 "nvme_io": false, 00:32:32.625 "nvme_io_md": false, 00:32:32.625 "write_zeroes": true, 00:32:32.625 "zcopy": false, 00:32:32.625 "get_zone_info": false, 00:32:32.625 "zone_management": false, 00:32:32.625 "zone_append": false, 00:32:32.625 "compare": false, 00:32:32.625 "compare_and_write": false, 00:32:32.625 "abort": false, 00:32:32.625 "seek_hole": false, 00:32:32.625 "seek_data": false, 00:32:32.625 "copy": false, 00:32:32.625 "nvme_iov_md": false 00:32:32.625 }, 00:32:32.625 "memory_domains": [ 00:32:32.625 { 00:32:32.625 "dma_device_id": "system", 00:32:32.625 "dma_device_type": 1 00:32:32.625 }, 00:32:32.625 { 00:32:32.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:32.625 "dma_device_type": 2 00:32:32.625 }, 00:32:32.625 { 00:32:32.625 "dma_device_id": "system", 00:32:32.625 "dma_device_type": 1 00:32:32.625 }, 00:32:32.625 { 00:32:32.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:32.625 
"dma_device_type": 2 00:32:32.625 } 00:32:32.625 ], 00:32:32.625 "driver_specific": { 00:32:32.625 "raid": { 00:32:32.625 "uuid": "eb026f84-25e7-4766-8507-a4ee9eda3581", 00:32:32.625 "strip_size_kb": 0, 00:32:32.625 "state": "online", 00:32:32.625 "raid_level": "raid1", 00:32:32.625 "superblock": true, 00:32:32.625 "num_base_bdevs": 2, 00:32:32.625 "num_base_bdevs_discovered": 2, 00:32:32.625 "num_base_bdevs_operational": 2, 00:32:32.625 "base_bdevs_list": [ 00:32:32.625 { 00:32:32.625 "name": "pt1", 00:32:32.625 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:32.625 "is_configured": true, 00:32:32.625 "data_offset": 256, 00:32:32.625 "data_size": 7936 00:32:32.625 }, 00:32:32.625 { 00:32:32.625 "name": "pt2", 00:32:32.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:32.625 "is_configured": true, 00:32:32.625 "data_offset": 256, 00:32:32.625 "data_size": 7936 00:32:32.625 } 00:32:32.625 ] 00:32:32.625 } 00:32:32.625 } 00:32:32.625 }' 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:32.625 pt2' 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.625 13:52:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:32.625 [2024-11-20 13:52:35.522899] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' eb026f84-25e7-4766-8507-a4ee9eda3581 '!=' eb026f84-25e7-4766-8507-a4ee9eda3581 ']' 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:32.883 [2024-11-20 13:52:35.570644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:32.883 "name": "raid_bdev1", 00:32:32.883 "uuid": "eb026f84-25e7-4766-8507-a4ee9eda3581", 00:32:32.883 "strip_size_kb": 0, 00:32:32.883 "state": "online", 00:32:32.883 "raid_level": "raid1", 00:32:32.883 "superblock": true, 00:32:32.883 "num_base_bdevs": 2, 00:32:32.883 "num_base_bdevs_discovered": 1, 00:32:32.883 "num_base_bdevs_operational": 1, 00:32:32.883 "base_bdevs_list": [ 00:32:32.883 { 00:32:32.883 "name": null, 00:32:32.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:32.883 "is_configured": false, 00:32:32.883 "data_offset": 0, 
00:32:32.883 "data_size": 7936 00:32:32.883 }, 00:32:32.883 { 00:32:32.883 "name": "pt2", 00:32:32.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:32.883 "is_configured": true, 00:32:32.883 "data_offset": 256, 00:32:32.883 "data_size": 7936 00:32:32.883 } 00:32:32.883 ] 00:32:32.883 }' 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:32.883 13:52:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:33.450 [2024-11-20 13:52:36.086724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:33.450 [2024-11-20 13:52:36.086762] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:33.450 [2024-11-20 13:52:36.086872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:33.450 [2024-11-20 13:52:36.086964] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:33.450 [2024-11-20 13:52:36.086999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 
-- # jq -r '.[]' 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:33.450 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:33.450 [2024-11-20 13:52:36.162664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:33.450 [2024-11-20 13:52:36.162744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:33.450 [2024-11-20 13:52:36.162771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:32:33.450 [2024-11-20 13:52:36.162788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:33.450 [2024-11-20 13:52:36.165843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:33.450 [2024-11-20 13:52:36.165925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:33.450 [2024-11-20 13:52:36.166014] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:33.450 [2024-11-20 13:52:36.166126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:33.450 [2024-11-20 13:52:36.166263] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:32:33.450 [2024-11-20 13:52:36.166285] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:33.450 [2024-11-20 13:52:36.166399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:33.451 [2024-11-20 13:52:36.166550] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:32:33.451 [2024-11-20 13:52:36.166572] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:32:33.451 [2024-11-20 13:52:36.166809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:33.451 pt2 00:32:33.451 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:32:33.451 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:33.451 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:33.451 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:33.451 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:33.451 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:33.451 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:33.451 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:33.451 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:33.451 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:33.451 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:33.451 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:33.451 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:33.451 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.451 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:33.451 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.451 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:33.451 "name": "raid_bdev1", 00:32:33.451 
"uuid": "eb026f84-25e7-4766-8507-a4ee9eda3581", 00:32:33.451 "strip_size_kb": 0, 00:32:33.451 "state": "online", 00:32:33.451 "raid_level": "raid1", 00:32:33.451 "superblock": true, 00:32:33.451 "num_base_bdevs": 2, 00:32:33.451 "num_base_bdevs_discovered": 1, 00:32:33.451 "num_base_bdevs_operational": 1, 00:32:33.451 "base_bdevs_list": [ 00:32:33.451 { 00:32:33.451 "name": null, 00:32:33.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:33.451 "is_configured": false, 00:32:33.451 "data_offset": 256, 00:32:33.451 "data_size": 7936 00:32:33.451 }, 00:32:33.451 { 00:32:33.451 "name": "pt2", 00:32:33.451 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:33.451 "is_configured": true, 00:32:33.451 "data_offset": 256, 00:32:33.451 "data_size": 7936 00:32:33.451 } 00:32:33.451 ] 00:32:33.451 }' 00:32:33.451 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:33.451 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:34.019 [2024-11-20 13:52:36.702969] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:34.019 [2024-11-20 13:52:36.703011] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:34.019 [2024-11-20 13:52:36.703124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:34.019 [2024-11-20 13:52:36.703209] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:34.019 [2024-11-20 13:52:36.703227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:34.019 [2024-11-20 13:52:36.767009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:34.019 [2024-11-20 13:52:36.767090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:34.019 [2024-11-20 13:52:36.767138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:32:34.019 [2024-11-20 13:52:36.767155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:34.019 [2024-11-20 
13:52:36.770131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:34.019 [2024-11-20 13:52:36.770176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:34.019 [2024-11-20 13:52:36.770308] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:34.019 [2024-11-20 13:52:36.770388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:34.019 [2024-11-20 13:52:36.770596] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:32:34.019 [2024-11-20 13:52:36.770615] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:34.019 [2024-11-20 13:52:36.770642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:32:34.019 [2024-11-20 13:52:36.770770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:34.019 [2024-11-20 13:52:36.770889] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:32:34.019 [2024-11-20 13:52:36.770920] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:34.019 [2024-11-20 13:52:36.771006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:34.019 [2024-11-20 13:52:36.771177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:32:34.019 [2024-11-20 13:52:36.771198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:32:34.019 [2024-11-20 13:52:36.771378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:34.019 pt1 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.019 13:52:36 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:34.019 "name": "raid_bdev1", 00:32:34.019 "uuid": "eb026f84-25e7-4766-8507-a4ee9eda3581", 00:32:34.019 "strip_size_kb": 0, 00:32:34.019 "state": "online", 00:32:34.019 "raid_level": "raid1", 00:32:34.019 "superblock": true, 00:32:34.019 "num_base_bdevs": 2, 00:32:34.019 "num_base_bdevs_discovered": 1, 00:32:34.019 "num_base_bdevs_operational": 1, 00:32:34.019 "base_bdevs_list": [ 00:32:34.019 { 00:32:34.019 "name": null, 00:32:34.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:34.019 "is_configured": false, 00:32:34.019 "data_offset": 256, 00:32:34.019 "data_size": 7936 00:32:34.019 }, 00:32:34.019 { 00:32:34.019 "name": "pt2", 00:32:34.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:34.019 "is_configured": true, 00:32:34.019 "data_offset": 256, 00:32:34.019 "data_size": 7936 00:32:34.019 } 00:32:34.019 ] 00:32:34.019 }' 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:34.019 13:52:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:32:34.588 [2024-11-20 13:52:37.359499] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' eb026f84-25e7-4766-8507-a4ee9eda3581 '!=' eb026f84-25e7-4766-8507-a4ee9eda3581 ']' 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88128 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88128 ']' 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 88128 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88128 00:32:34.588 killing process with pid 88128 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88128' 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@973 -- # kill 88128 00:32:34.588 [2024-11-20 13:52:37.435125] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:34.588 [2024-11-20 13:52:37.435264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:34.588 13:52:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 88128 00:32:34.588 [2024-11-20 13:52:37.435349] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:34.588 [2024-11-20 13:52:37.435377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:32:34.847 [2024-11-20 13:52:37.614608] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:35.787 13:52:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:32:35.787 00:32:35.787 real 0m6.670s 00:32:35.787 user 0m10.523s 00:32:35.787 sys 0m1.082s 00:32:35.787 ************************************ 00:32:35.787 END TEST raid_superblock_test_md_separate 00:32:35.787 ************************************ 00:32:35.787 13:52:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:35.787 13:52:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:35.787 13:52:38 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:32:35.787 13:52:38 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:32:35.787 13:52:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:32:35.787 13:52:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:35.787 13:52:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:35.787 ************************************ 00:32:35.787 START TEST raid_rebuild_test_sb_md_separate 00:32:35.787 
************************************ 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:32:35.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88456 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88456 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88456 ']' 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:35.787 13:52:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:36.057 [2024-11-20 13:52:38.761653] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:32:36.057 [2024-11-20 13:52:38.762143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88456 ] 00:32:36.057 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:36.057 Zero copy mechanism will not be used. 
00:32:36.057 [2024-11-20 13:52:38.936338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.315 [2024-11-20 13:52:39.062739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.573 [2024-11-20 13:52:39.254429] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:36.573 [2024-11-20 13:52:39.254669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:37.141 BaseBdev1_malloc 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:37.141 [2024-11-20 13:52:39.829715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:37.141 [2024-11-20 13:52:39.830051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:37.141 [2024-11-20 13:52:39.830099] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:37.141 [2024-11-20 13:52:39.830138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:37.141 [2024-11-20 13:52:39.832696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:37.141 BaseBdev1 00:32:37.141 [2024-11-20 13:52:39.832925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:37.141 BaseBdev2_malloc 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:37.141 [2024-11-20 13:52:39.877851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:37.141 [2024-11-20 13:52:39.878109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:37.141 [2024-11-20 13:52:39.878219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:32:37.141 [2024-11-20 13:52:39.878249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:37.141 [2024-11-20 13:52:39.880840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:37.141 [2024-11-20 13:52:39.880924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:37.141 BaseBdev2 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:37.141 spare_malloc 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:37.141 spare_delay 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:37.141 [2024-11-20 
13:52:39.951474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:37.141 [2024-11-20 13:52:39.951755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:37.141 [2024-11-20 13:52:39.951840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:32:37.141 [2024-11-20 13:52:39.951872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:37.141 [2024-11-20 13:52:39.954826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:37.141 [2024-11-20 13:52:39.955053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:37.141 spare 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:37.141 [2024-11-20 13:52:39.959544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:37.141 [2024-11-20 13:52:39.962137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:37.141 [2024-11-20 13:52:39.962421] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:37.141 [2024-11-20 13:52:39.962449] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:37.141 [2024-11-20 13:52:39.962551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:37.141 [2024-11-20 13:52:39.962771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 
00:32:37.141 [2024-11-20 13:52:39.962792] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:37.141 [2024-11-20 13:52:39.962950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:37.141 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:37.142 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:37.142 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:37.142 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:37.142 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:37.142 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.142 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:37.142 13:52:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:37.142 13:52:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.142 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:37.142 "name": "raid_bdev1", 00:32:37.142 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:37.142 "strip_size_kb": 0, 00:32:37.142 "state": "online", 00:32:37.142 "raid_level": "raid1", 00:32:37.142 "superblock": true, 00:32:37.142 "num_base_bdevs": 2, 00:32:37.142 "num_base_bdevs_discovered": 2, 00:32:37.142 "num_base_bdevs_operational": 2, 00:32:37.142 "base_bdevs_list": [ 00:32:37.142 { 00:32:37.142 "name": "BaseBdev1", 00:32:37.142 "uuid": "f09e180e-5c30-5efe-8b08-0e080d3fb04c", 00:32:37.142 "is_configured": true, 00:32:37.142 "data_offset": 256, 00:32:37.142 "data_size": 7936 00:32:37.142 }, 00:32:37.142 { 00:32:37.142 "name": "BaseBdev2", 00:32:37.142 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:37.142 "is_configured": true, 00:32:37.142 "data_offset": 256, 00:32:37.142 "data_size": 7936 00:32:37.142 } 00:32:37.142 ] 00:32:37.142 }' 00:32:37.142 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:37.142 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:37.709 [2024-11-20 13:52:40.480060] bdev_raid.c:1133:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:37.709 13:52:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:37.709 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:37.968 [2024-11-20 13:52:40.867852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:37.968 /dev/nbd0 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:38.227 1+0 records in 00:32:38.227 1+0 records out 00:32:38.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347124 s, 11.8 MB/s 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:32:38.227 13:52:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:32:39.164 7936+0 records in 00:32:39.164 7936+0 records out 00:32:39.164 32505856 bytes (33 MB, 31 MiB) copied, 1.01974 s, 31.9 MB/s 00:32:39.164 13:52:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:32:39.164 13:52:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:39.164 13:52:41 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:39.164 13:52:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:39.164 13:52:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:32:39.164 13:52:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:39.164 13:52:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:39.424 [2024-11-20 13:52:42.244775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:39.424 [2024-11-20 13:52:42.252893] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:39.424 "name": "raid_bdev1", 00:32:39.424 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:39.424 "strip_size_kb": 0, 00:32:39.424 "state": "online", 00:32:39.424 "raid_level": "raid1", 00:32:39.424 "superblock": true, 00:32:39.424 "num_base_bdevs": 2, 00:32:39.424 "num_base_bdevs_discovered": 1, 00:32:39.424 "num_base_bdevs_operational": 1, 00:32:39.424 "base_bdevs_list": [ 00:32:39.424 { 00:32:39.424 "name": null, 00:32:39.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:39.424 "is_configured": false, 00:32:39.424 "data_offset": 0, 00:32:39.424 "data_size": 7936 00:32:39.424 }, 00:32:39.424 { 00:32:39.424 "name": "BaseBdev2", 00:32:39.424 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:39.424 "is_configured": true, 00:32:39.424 "data_offset": 256, 00:32:39.424 "data_size": 7936 00:32:39.424 } 00:32:39.424 ] 00:32:39.424 }' 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:39.424 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:39.991 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:39.991 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.991 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:39.991 [2024-11-20 13:52:42.765140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:39.991 [2024-11-20 13:52:42.778558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:32:39.991 13:52:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.991 13:52:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:32:39.991 [2024-11-20 13:52:42.781259] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:40.927 13:52:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:40.927 13:52:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:40.927 13:52:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:40.927 13:52:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:40.927 13:52:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:40.927 13:52:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:40.927 13:52:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.927 13:52:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:40.927 13:52:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:40.927 13:52:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.186 13:52:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:41.186 "name": "raid_bdev1", 00:32:41.187 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:41.187 "strip_size_kb": 0, 00:32:41.187 "state": "online", 00:32:41.187 "raid_level": "raid1", 00:32:41.187 "superblock": true, 00:32:41.187 "num_base_bdevs": 2, 00:32:41.187 "num_base_bdevs_discovered": 2, 00:32:41.187 "num_base_bdevs_operational": 2, 00:32:41.187 "process": { 00:32:41.187 "type": "rebuild", 00:32:41.187 
"target": "spare", 00:32:41.187 "progress": { 00:32:41.187 "blocks": 2560, 00:32:41.187 "percent": 32 00:32:41.187 } 00:32:41.187 }, 00:32:41.187 "base_bdevs_list": [ 00:32:41.187 { 00:32:41.187 "name": "spare", 00:32:41.187 "uuid": "3ef222dc-924a-5b60-b0e0-6955f1da9907", 00:32:41.187 "is_configured": true, 00:32:41.187 "data_offset": 256, 00:32:41.187 "data_size": 7936 00:32:41.187 }, 00:32:41.187 { 00:32:41.187 "name": "BaseBdev2", 00:32:41.187 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:41.187 "is_configured": true, 00:32:41.187 "data_offset": 256, 00:32:41.187 "data_size": 7936 00:32:41.187 } 00:32:41.187 ] 00:32:41.187 }' 00:32:41.187 13:52:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:41.187 13:52:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:41.187 13:52:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:41.187 13:52:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:41.187 13:52:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:41.187 13:52:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.187 13:52:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:41.187 [2024-11-20 13:52:43.946503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:41.187 [2024-11-20 13:52:43.990529] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:41.187 [2024-11-20 13:52:43.990799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:41.187 [2024-11-20 13:52:43.990838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:41.187 
[2024-11-20 13:52:43.990858] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:41.187 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.187 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:41.187 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:41.187 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:41.187 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:41.187 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:41.187 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:41.187 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:41.187 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:41.187 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:41.187 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:41.187 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:41.187 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:41.187 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.187 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:41.187 13:52:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.187 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:41.187 "name": "raid_bdev1", 00:32:41.187 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:41.187 "strip_size_kb": 0, 00:32:41.187 "state": "online", 00:32:41.187 "raid_level": "raid1", 00:32:41.187 "superblock": true, 00:32:41.187 "num_base_bdevs": 2, 00:32:41.187 "num_base_bdevs_discovered": 1, 00:32:41.187 "num_base_bdevs_operational": 1, 00:32:41.187 "base_bdevs_list": [ 00:32:41.187 { 00:32:41.187 "name": null, 00:32:41.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:41.187 "is_configured": false, 00:32:41.187 "data_offset": 0, 00:32:41.187 "data_size": 7936 00:32:41.187 }, 00:32:41.187 { 00:32:41.187 "name": "BaseBdev2", 00:32:41.187 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:41.187 "is_configured": true, 00:32:41.187 "data_offset": 256, 00:32:41.187 "data_size": 7936 00:32:41.187 } 00:32:41.187 ] 00:32:41.187 }' 00:32:41.187 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:41.187 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:41.755 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:41.755 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:41.755 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:41.755 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:41.755 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:41.755 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:32:41.755 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:41.755 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.755 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:41.755 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.755 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:41.755 "name": "raid_bdev1", 00:32:41.755 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:41.755 "strip_size_kb": 0, 00:32:41.755 "state": "online", 00:32:41.755 "raid_level": "raid1", 00:32:41.755 "superblock": true, 00:32:41.755 "num_base_bdevs": 2, 00:32:41.755 "num_base_bdevs_discovered": 1, 00:32:41.755 "num_base_bdevs_operational": 1, 00:32:41.755 "base_bdevs_list": [ 00:32:41.755 { 00:32:41.755 "name": null, 00:32:41.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:41.755 "is_configured": false, 00:32:41.755 "data_offset": 0, 00:32:41.755 "data_size": 7936 00:32:41.755 }, 00:32:41.755 { 00:32:41.755 "name": "BaseBdev2", 00:32:41.755 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:41.755 "is_configured": true, 00:32:41.755 "data_offset": 256, 00:32:41.755 "data_size": 7936 00:32:41.755 } 00:32:41.755 ] 00:32:41.755 }' 00:32:41.755 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:42.013 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:42.014 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:42.014 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:42.014 
13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:42.014 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.014 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:42.014 [2024-11-20 13:52:44.749908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:42.014 [2024-11-20 13:52:44.763084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:32:42.014 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.014 13:52:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:32:42.014 [2024-11-20 13:52:44.765907] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:42.950 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:42.950 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:42.950 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:42.950 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:42.950 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:42.950 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:42.950 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:42.950 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.950 13:52:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:42.950 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.950 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:42.950 "name": "raid_bdev1", 00:32:42.950 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:42.950 "strip_size_kb": 0, 00:32:42.950 "state": "online", 00:32:42.950 "raid_level": "raid1", 00:32:42.950 "superblock": true, 00:32:42.950 "num_base_bdevs": 2, 00:32:42.950 "num_base_bdevs_discovered": 2, 00:32:42.950 "num_base_bdevs_operational": 2, 00:32:42.950 "process": { 00:32:42.950 "type": "rebuild", 00:32:42.950 "target": "spare", 00:32:42.950 "progress": { 00:32:42.950 "blocks": 2560, 00:32:42.950 "percent": 32 00:32:42.950 } 00:32:42.950 }, 00:32:42.950 "base_bdevs_list": [ 00:32:42.950 { 00:32:42.950 "name": "spare", 00:32:42.950 "uuid": "3ef222dc-924a-5b60-b0e0-6955f1da9907", 00:32:42.950 "is_configured": true, 00:32:42.950 "data_offset": 256, 00:32:42.950 "data_size": 7936 00:32:42.950 }, 00:32:42.950 { 00:32:42.950 "name": "BaseBdev2", 00:32:42.950 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:42.950 "is_configured": true, 00:32:42.950 "data_offset": 256, 00:32:42.950 "data_size": 7936 00:32:42.950 } 00:32:42.950 ] 00:32:42.950 }' 00:32:42.950 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = 
true ']' 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:32:43.209 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=776 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:43.209 "name": "raid_bdev1", 00:32:43.209 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:43.209 "strip_size_kb": 0, 00:32:43.209 "state": "online", 00:32:43.209 "raid_level": "raid1", 00:32:43.209 "superblock": true, 00:32:43.209 "num_base_bdevs": 2, 00:32:43.209 "num_base_bdevs_discovered": 2, 00:32:43.209 "num_base_bdevs_operational": 2, 00:32:43.209 "process": { 00:32:43.209 "type": "rebuild", 00:32:43.209 "target": "spare", 00:32:43.209 "progress": { 00:32:43.209 "blocks": 2816, 00:32:43.209 "percent": 35 00:32:43.209 } 00:32:43.209 }, 00:32:43.209 "base_bdevs_list": [ 00:32:43.209 { 00:32:43.209 "name": "spare", 00:32:43.209 "uuid": "3ef222dc-924a-5b60-b0e0-6955f1da9907", 00:32:43.209 "is_configured": true, 00:32:43.209 "data_offset": 256, 00:32:43.209 "data_size": 7936 00:32:43.209 }, 00:32:43.209 { 00:32:43.209 "name": "BaseBdev2", 00:32:43.209 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:43.209 "is_configured": true, 00:32:43.209 "data_offset": 256, 00:32:43.209 "data_size": 7936 00:32:43.209 } 00:32:43.209 ] 00:32:43.209 }' 00:32:43.209 13:52:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:43.209 13:52:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:43.209 13:52:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:43.209 13:52:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:43.209 13:52:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:44.585 13:52:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:44.585 13:52:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:44.585 13:52:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:44.585 13:52:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:44.585 13:52:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:44.585 13:52:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:44.585 13:52:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:44.585 13:52:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.585 13:52:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:44.585 13:52:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:44.585 13:52:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.585 13:52:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:44.585 "name": "raid_bdev1", 00:32:44.585 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:44.585 "strip_size_kb": 0, 00:32:44.585 "state": "online", 00:32:44.585 "raid_level": "raid1", 00:32:44.585 "superblock": true, 00:32:44.585 "num_base_bdevs": 2, 00:32:44.585 "num_base_bdevs_discovered": 2, 00:32:44.585 "num_base_bdevs_operational": 2, 00:32:44.585 "process": { 00:32:44.585 "type": "rebuild", 00:32:44.585 "target": "spare", 00:32:44.585 "progress": { 00:32:44.585 "blocks": 5888, 00:32:44.585 "percent": 74 00:32:44.585 } 00:32:44.585 }, 00:32:44.585 "base_bdevs_list": [ 00:32:44.585 { 00:32:44.585 "name": "spare", 00:32:44.585 "uuid": 
"3ef222dc-924a-5b60-b0e0-6955f1da9907", 00:32:44.585 "is_configured": true, 00:32:44.585 "data_offset": 256, 00:32:44.585 "data_size": 7936 00:32:44.585 }, 00:32:44.585 { 00:32:44.585 "name": "BaseBdev2", 00:32:44.585 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:44.585 "is_configured": true, 00:32:44.585 "data_offset": 256, 00:32:44.585 "data_size": 7936 00:32:44.585 } 00:32:44.585 ] 00:32:44.585 }' 00:32:44.585 13:52:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:44.585 13:52:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:44.585 13:52:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:44.585 13:52:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:44.585 13:52:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:45.152 [2024-11-20 13:52:47.889014] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:45.152 [2024-11-20 13:52:47.889131] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:45.152 [2024-11-20 13:52:47.889277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:45.411 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:45.411 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:45.411 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:45.411 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:45.411 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:32:45.411 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:45.411 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.411 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.411 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:45.411 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:45.411 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.670 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:45.670 "name": "raid_bdev1", 00:32:45.670 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:45.670 "strip_size_kb": 0, 00:32:45.670 "state": "online", 00:32:45.671 "raid_level": "raid1", 00:32:45.671 "superblock": true, 00:32:45.671 "num_base_bdevs": 2, 00:32:45.671 "num_base_bdevs_discovered": 2, 00:32:45.671 "num_base_bdevs_operational": 2, 00:32:45.671 "base_bdevs_list": [ 00:32:45.671 { 00:32:45.671 "name": "spare", 00:32:45.671 "uuid": "3ef222dc-924a-5b60-b0e0-6955f1da9907", 00:32:45.671 "is_configured": true, 00:32:45.671 "data_offset": 256, 00:32:45.671 "data_size": 7936 00:32:45.671 }, 00:32:45.671 { 00:32:45.671 "name": "BaseBdev2", 00:32:45.671 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:45.671 "is_configured": true, 00:32:45.671 "data_offset": 256, 00:32:45.671 "data_size": 7936 00:32:45.671 } 00:32:45.671 ] 00:32:45.671 }' 00:32:45.671 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:45.671 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d 
]] 00:32:45.671 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:45.671 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:32:45.671 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:32:45.671 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:45.671 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:45.671 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:45.671 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:45.671 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:45.671 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:45.671 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.671 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.671 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:45.671 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.671 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:45.671 "name": "raid_bdev1", 00:32:45.671 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:45.671 "strip_size_kb": 0, 00:32:45.671 "state": "online", 00:32:45.671 "raid_level": "raid1", 00:32:45.671 "superblock": true, 00:32:45.671 "num_base_bdevs": 2, 00:32:45.671 "num_base_bdevs_discovered": 2, 
00:32:45.671 "num_base_bdevs_operational": 2, 00:32:45.671 "base_bdevs_list": [ 00:32:45.671 { 00:32:45.671 "name": "spare", 00:32:45.671 "uuid": "3ef222dc-924a-5b60-b0e0-6955f1da9907", 00:32:45.671 "is_configured": true, 00:32:45.671 "data_offset": 256, 00:32:45.671 "data_size": 7936 00:32:45.671 }, 00:32:45.671 { 00:32:45.671 "name": "BaseBdev2", 00:32:45.671 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:45.671 "is_configured": true, 00:32:45.671 "data_offset": 256, 00:32:45.671 "data_size": 7936 00:32:45.671 } 00:32:45.671 ] 00:32:45.671 }' 00:32:45.671 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:45.671 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:45.671 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:45.929 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:45.929 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:45.929 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:45.929 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:45.929 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:45.929 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:45.929 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:45.929 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:45.929 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:45.929 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:45.929 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:45.929 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.929 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.929 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:45.929 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:45.929 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.929 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:45.929 "name": "raid_bdev1", 00:32:45.929 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:45.929 "strip_size_kb": 0, 00:32:45.929 "state": "online", 00:32:45.929 "raid_level": "raid1", 00:32:45.929 "superblock": true, 00:32:45.929 "num_base_bdevs": 2, 00:32:45.929 "num_base_bdevs_discovered": 2, 00:32:45.929 "num_base_bdevs_operational": 2, 00:32:45.929 "base_bdevs_list": [ 00:32:45.929 { 00:32:45.929 "name": "spare", 00:32:45.929 "uuid": "3ef222dc-924a-5b60-b0e0-6955f1da9907", 00:32:45.929 "is_configured": true, 00:32:45.929 "data_offset": 256, 00:32:45.929 "data_size": 7936 00:32:45.929 }, 00:32:45.929 { 00:32:45.929 "name": "BaseBdev2", 00:32:45.929 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:45.929 "is_configured": true, 00:32:45.929 "data_offset": 256, 00:32:45.929 "data_size": 7936 00:32:45.929 } 00:32:45.929 ] 00:32:45.929 }' 00:32:45.929 13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:45.929 
13:52:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:46.503 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:46.503 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.503 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:46.503 [2024-11-20 13:52:49.110879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:46.503 [2024-11-20 13:52:49.111264] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:46.503 [2024-11-20 13:52:49.111409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:46.503 [2024-11-20 13:52:49.111520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:46.503 [2024-11-20 13:52:49.111539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:46.503 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.503 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:32:46.503 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:46.503 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.503 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:46.504 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.504 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:32:46.504 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:32:46.504 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:32:46.504 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:46.504 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:46.504 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:46.504 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:46.504 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:46.504 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:46.504 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:32:46.504 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:46.504 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:46.504 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:46.766 /dev/nbd0 00:32:46.766 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:46.766 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:46.767 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:46.767 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:32:46.767 13:52:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:46.767 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:46.767 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:46.767 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:32:46.767 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:46.767 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:46.767 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:46.767 1+0 records in 00:32:46.767 1+0 records out 00:32:46.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241183 s, 17.0 MB/s 00:32:46.767 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:46.767 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:32:46.767 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:46.767 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:46.767 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:32:46.767 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:46.767 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:46.767 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:32:47.025 /dev/nbd1 00:32:47.025 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:47.025 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:47.025 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:32:47.025 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:32:47.025 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:47.026 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:47.026 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:32:47.026 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:32:47.026 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:47.026 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:47.026 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:47.026 1+0 records in 00:32:47.026 1+0 records out 00:32:47.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410342 s, 10.0 MB/s 00:32:47.026 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:47.026 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:32:47.026 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm 
-f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:47.026 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:47.026 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:32:47.026 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:47.026 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:47.026 13:52:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:47.284 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:32:47.284 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:47.284 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:47.284 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:47.284 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:32:47.284 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:47.284 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:47.543 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:47.543 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:47.543 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:47.543 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # 
(( i = 1 )) 00:32:47.543 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:47.543 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:47.543 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:32:47.543 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:32:47.543 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:47.543 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:47.802 [2024-11-20 13:52:50.579752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:47.802 [2024-11-20 13:52:50.579834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:47.802 [2024-11-20 13:52:50.579872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:47.802 [2024-11-20 13:52:50.579926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:47.802 [2024-11-20 13:52:50.582631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:47.802 [2024-11-20 13:52:50.582695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:47.802 [2024-11-20 13:52:50.582813] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:47.802 [2024-11-20 13:52:50.582881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:47.802 [2024-11-20 13:52:50.583137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:47.802 spare 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:32:47.802 
13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:47.802 [2024-11-20 13:52:50.683280] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:32:47.802 [2024-11-20 13:52:50.683339] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:47.802 [2024-11-20 13:52:50.683458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:32:47.802 [2024-11-20 13:52:50.683703] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:32:47.802 [2024-11-20 13:52:50.683736] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:32:47.802 [2024-11-20 13:52:50.683940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:47.802 
13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:47.802 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.062 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:48.062 "name": "raid_bdev1", 00:32:48.062 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:48.062 "strip_size_kb": 0, 00:32:48.062 "state": "online", 00:32:48.062 "raid_level": "raid1", 00:32:48.062 "superblock": true, 00:32:48.062 "num_base_bdevs": 2, 00:32:48.062 "num_base_bdevs_discovered": 2, 00:32:48.062 "num_base_bdevs_operational": 2, 00:32:48.062 "base_bdevs_list": [ 00:32:48.062 { 00:32:48.062 "name": "spare", 00:32:48.062 "uuid": "3ef222dc-924a-5b60-b0e0-6955f1da9907", 00:32:48.062 "is_configured": true, 00:32:48.062 "data_offset": 256, 00:32:48.062 "data_size": 7936 00:32:48.062 }, 00:32:48.062 { 00:32:48.062 "name": "BaseBdev2", 00:32:48.062 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:48.062 "is_configured": true, 00:32:48.062 "data_offset": 256, 00:32:48.062 "data_size": 7936 00:32:48.062 } 00:32:48.062 ] 00:32:48.062 }' 00:32:48.062 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:48.062 13:52:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:48.322 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:48.322 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:48.322 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:48.322 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:48.322 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:48.322 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:48.322 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:48.322 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.322 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:48.581 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.581 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:48.581 "name": "raid_bdev1", 00:32:48.582 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:48.582 "strip_size_kb": 0, 00:32:48.582 "state": "online", 00:32:48.582 "raid_level": "raid1", 00:32:48.582 "superblock": true, 00:32:48.582 "num_base_bdevs": 2, 00:32:48.582 "num_base_bdevs_discovered": 2, 00:32:48.582 "num_base_bdevs_operational": 2, 00:32:48.582 "base_bdevs_list": [ 00:32:48.582 { 00:32:48.582 "name": "spare", 00:32:48.582 "uuid": "3ef222dc-924a-5b60-b0e0-6955f1da9907", 00:32:48.582 
"is_configured": true, 00:32:48.582 "data_offset": 256, 00:32:48.582 "data_size": 7936 00:32:48.582 }, 00:32:48.582 { 00:32:48.582 "name": "BaseBdev2", 00:32:48.582 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:48.582 "is_configured": true, 00:32:48.582 "data_offset": 256, 00:32:48.582 "data_size": 7936 00:32:48.582 } 00:32:48.582 ] 00:32:48.582 }' 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:48.582 [2024-11-20 13:52:51.412258] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:48.582 "name": "raid_bdev1", 00:32:48.582 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:48.582 "strip_size_kb": 0, 00:32:48.582 "state": "online", 00:32:48.582 "raid_level": "raid1", 00:32:48.582 "superblock": true, 00:32:48.582 "num_base_bdevs": 2, 00:32:48.582 "num_base_bdevs_discovered": 1, 00:32:48.582 "num_base_bdevs_operational": 1, 00:32:48.582 "base_bdevs_list": [ 00:32:48.582 { 00:32:48.582 "name": null, 00:32:48.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:48.582 "is_configured": false, 00:32:48.582 "data_offset": 0, 00:32:48.582 "data_size": 7936 00:32:48.582 }, 00:32:48.582 { 00:32:48.582 "name": "BaseBdev2", 00:32:48.582 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:48.582 "is_configured": true, 00:32:48.582 "data_offset": 256, 00:32:48.582 "data_size": 7936 00:32:48.582 } 00:32:48.582 ] 00:32:48.582 }' 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:48.582 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:49.150 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:49.150 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.150 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:49.150 [2024-11-20 13:52:51.928869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:49.150 [2024-11-20 13:52:51.929151] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:49.150 [2024-11-20 13:52:51.929196] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid 
bdev raid_bdev1. 00:32:49.150 [2024-11-20 13:52:51.929268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:49.150 [2024-11-20 13:52:51.942483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:32:49.150 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.150 13:52:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:32:49.150 [2024-11-20 13:52:51.945196] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:50.087 13:52:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:50.087 13:52:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:50.087 13:52:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:50.087 13:52:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:50.088 13:52:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:50.088 13:52:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:50.088 13:52:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.088 13:52:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:50.088 13:52:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:50.088 13:52:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.346 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:50.346 "name": 
"raid_bdev1", 00:32:50.346 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:50.346 "strip_size_kb": 0, 00:32:50.347 "state": "online", 00:32:50.347 "raid_level": "raid1", 00:32:50.347 "superblock": true, 00:32:50.347 "num_base_bdevs": 2, 00:32:50.347 "num_base_bdevs_discovered": 2, 00:32:50.347 "num_base_bdevs_operational": 2, 00:32:50.347 "process": { 00:32:50.347 "type": "rebuild", 00:32:50.347 "target": "spare", 00:32:50.347 "progress": { 00:32:50.347 "blocks": 2560, 00:32:50.347 "percent": 32 00:32:50.347 } 00:32:50.347 }, 00:32:50.347 "base_bdevs_list": [ 00:32:50.347 { 00:32:50.347 "name": "spare", 00:32:50.347 "uuid": "3ef222dc-924a-5b60-b0e0-6955f1da9907", 00:32:50.347 "is_configured": true, 00:32:50.347 "data_offset": 256, 00:32:50.347 "data_size": 7936 00:32:50.347 }, 00:32:50.347 { 00:32:50.347 "name": "BaseBdev2", 00:32:50.347 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:50.347 "is_configured": true, 00:32:50.347 "data_offset": 256, 00:32:50.347 "data_size": 7936 00:32:50.347 } 00:32:50.347 ] 00:32:50.347 }' 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:50.347 [2024-11-20 13:52:53.118947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:32:50.347 [2024-11-20 13:52:53.154031] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:50.347 [2024-11-20 13:52:53.154134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:50.347 [2024-11-20 13:52:53.154158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:50.347 [2024-11-20 13:52:53.154193] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:50.347 "name": "raid_bdev1", 00:32:50.347 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:50.347 "strip_size_kb": 0, 00:32:50.347 "state": "online", 00:32:50.347 "raid_level": "raid1", 00:32:50.347 "superblock": true, 00:32:50.347 "num_base_bdevs": 2, 00:32:50.347 "num_base_bdevs_discovered": 1, 00:32:50.347 "num_base_bdevs_operational": 1, 00:32:50.347 "base_bdevs_list": [ 00:32:50.347 { 00:32:50.347 "name": null, 00:32:50.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:50.347 "is_configured": false, 00:32:50.347 "data_offset": 0, 00:32:50.347 "data_size": 7936 00:32:50.347 }, 00:32:50.347 { 00:32:50.347 "name": "BaseBdev2", 00:32:50.347 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:50.347 "is_configured": true, 00:32:50.347 "data_offset": 256, 00:32:50.347 "data_size": 7936 00:32:50.347 } 00:32:50.347 ] 00:32:50.347 }' 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:50.347 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:50.915 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:50.915 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.915 13:52:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:50.915 [2024-11-20 13:52:53.700684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:50.915 [2024-11-20 13:52:53.700804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:50.915 [2024-11-20 13:52:53.700844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:32:50.915 [2024-11-20 13:52:53.700866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:50.915 [2024-11-20 13:52:53.701347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:50.915 [2024-11-20 13:52:53.701425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:50.915 [2024-11-20 13:52:53.701546] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:50.915 [2024-11-20 13:52:53.701574] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:50.915 [2024-11-20 13:52:53.701598] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:32:50.915 [2024-11-20 13:52:53.701643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:50.915 [2024-11-20 13:52:53.713670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:32:50.915 spare 00:32:50.915 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.915 13:52:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:32:50.915 [2024-11-20 13:52:53.716335] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:51.851 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:51.851 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:51.851 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:51.851 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:51.851 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:51.851 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:51.851 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.851 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:51.851 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:51.852 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.110 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:52.110 "name": 
"raid_bdev1", 00:32:52.110 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:52.110 "strip_size_kb": 0, 00:32:52.110 "state": "online", 00:32:52.111 "raid_level": "raid1", 00:32:52.111 "superblock": true, 00:32:52.111 "num_base_bdevs": 2, 00:32:52.111 "num_base_bdevs_discovered": 2, 00:32:52.111 "num_base_bdevs_operational": 2, 00:32:52.111 "process": { 00:32:52.111 "type": "rebuild", 00:32:52.111 "target": "spare", 00:32:52.111 "progress": { 00:32:52.111 "blocks": 2560, 00:32:52.111 "percent": 32 00:32:52.111 } 00:32:52.111 }, 00:32:52.111 "base_bdevs_list": [ 00:32:52.111 { 00:32:52.111 "name": "spare", 00:32:52.111 "uuid": "3ef222dc-924a-5b60-b0e0-6955f1da9907", 00:32:52.111 "is_configured": true, 00:32:52.111 "data_offset": 256, 00:32:52.111 "data_size": 7936 00:32:52.111 }, 00:32:52.111 { 00:32:52.111 "name": "BaseBdev2", 00:32:52.111 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:52.111 "is_configured": true, 00:32:52.111 "data_offset": 256, 00:32:52.111 "data_size": 7936 00:32:52.111 } 00:32:52.111 ] 00:32:52.111 }' 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:52.111 [2024-11-20 13:52:54.890130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:32:52.111 [2024-11-20 13:52:54.925532] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:52.111 [2024-11-20 13:52:54.925640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:52.111 [2024-11-20 13:52:54.925670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:52.111 [2024-11-20 13:52:54.925682] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:52.111 "name": "raid_bdev1", 00:32:52.111 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:52.111 "strip_size_kb": 0, 00:32:52.111 "state": "online", 00:32:52.111 "raid_level": "raid1", 00:32:52.111 "superblock": true, 00:32:52.111 "num_base_bdevs": 2, 00:32:52.111 "num_base_bdevs_discovered": 1, 00:32:52.111 "num_base_bdevs_operational": 1, 00:32:52.111 "base_bdevs_list": [ 00:32:52.111 { 00:32:52.111 "name": null, 00:32:52.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.111 "is_configured": false, 00:32:52.111 "data_offset": 0, 00:32:52.111 "data_size": 7936 00:32:52.111 }, 00:32:52.111 { 00:32:52.111 "name": "BaseBdev2", 00:32:52.111 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:52.111 "is_configured": true, 00:32:52.111 "data_offset": 256, 00:32:52.111 "data_size": 7936 00:32:52.111 } 00:32:52.111 ] 00:32:52.111 }' 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:52.111 13:52:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:52.678 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:52.678 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:52.678 13:52:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:52.678 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:52.678 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:52.678 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:52.678 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.678 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:52.678 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:52.678 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.678 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:52.678 "name": "raid_bdev1", 00:32:52.678 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:52.678 "strip_size_kb": 0, 00:32:52.678 "state": "online", 00:32:52.678 "raid_level": "raid1", 00:32:52.678 "superblock": true, 00:32:52.678 "num_base_bdevs": 2, 00:32:52.678 "num_base_bdevs_discovered": 1, 00:32:52.678 "num_base_bdevs_operational": 1, 00:32:52.678 "base_bdevs_list": [ 00:32:52.678 { 00:32:52.678 "name": null, 00:32:52.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.678 "is_configured": false, 00:32:52.678 "data_offset": 0, 00:32:52.678 "data_size": 7936 00:32:52.678 }, 00:32:52.678 { 00:32:52.678 "name": "BaseBdev2", 00:32:52.678 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:52.678 "is_configured": true, 00:32:52.678 "data_offset": 256, 00:32:52.678 "data_size": 7936 00:32:52.678 } 00:32:52.678 ] 00:32:52.678 }' 00:32:52.678 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:52.678 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:52.678 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:52.937 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:52.937 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:32:52.937 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.937 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:52.937 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.938 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:52.938 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.938 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:52.938 [2024-11-20 13:52:55.631754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:52.938 [2024-11-20 13:52:55.631829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:52.938 [2024-11-20 13:52:55.631866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:32:52.938 [2024-11-20 13:52:55.631883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:52.938 [2024-11-20 13:52:55.632254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:52.938 [2024-11-20 13:52:55.632290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:32:52.938 [2024-11-20 13:52:55.632381] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:52.938 [2024-11-20 13:52:55.632404] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:52.938 [2024-11-20 13:52:55.632420] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:52.938 [2024-11-20 13:52:55.632435] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:32:52.938 BaseBdev1 00:32:52.938 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.938 13:52:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:32:53.876 13:52:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:53.876 13:52:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:53.876 13:52:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:53.876 13:52:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:53.876 13:52:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:53.876 13:52:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:53.876 13:52:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:53.876 13:52:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:53.876 13:52:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:32:53.876 13:52:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:53.876 13:52:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:53.876 13:52:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:53.876 13:52:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.876 13:52:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:53.876 13:52:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.876 13:52:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:53.876 "name": "raid_bdev1", 00:32:53.876 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:53.876 "strip_size_kb": 0, 00:32:53.876 "state": "online", 00:32:53.876 "raid_level": "raid1", 00:32:53.876 "superblock": true, 00:32:53.876 "num_base_bdevs": 2, 00:32:53.876 "num_base_bdevs_discovered": 1, 00:32:53.876 "num_base_bdevs_operational": 1, 00:32:53.876 "base_bdevs_list": [ 00:32:53.876 { 00:32:53.876 "name": null, 00:32:53.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:53.876 "is_configured": false, 00:32:53.876 "data_offset": 0, 00:32:53.876 "data_size": 7936 00:32:53.876 }, 00:32:53.876 { 00:32:53.876 "name": "BaseBdev2", 00:32:53.876 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:53.876 "is_configured": true, 00:32:53.876 "data_offset": 256, 00:32:53.876 "data_size": 7936 00:32:53.876 } 00:32:53.876 ] 00:32:53.876 }' 00:32:53.876 13:52:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:53.876 13:52:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:54.445 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:32:54.445 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:54.446 "name": "raid_bdev1", 00:32:54.446 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:54.446 "strip_size_kb": 0, 00:32:54.446 "state": "online", 00:32:54.446 "raid_level": "raid1", 00:32:54.446 "superblock": true, 00:32:54.446 "num_base_bdevs": 2, 00:32:54.446 "num_base_bdevs_discovered": 1, 00:32:54.446 "num_base_bdevs_operational": 1, 00:32:54.446 "base_bdevs_list": [ 00:32:54.446 { 00:32:54.446 "name": null, 00:32:54.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:54.446 "is_configured": false, 00:32:54.446 "data_offset": 0, 00:32:54.446 "data_size": 7936 00:32:54.446 }, 00:32:54.446 { 00:32:54.446 "name": "BaseBdev2", 00:32:54.446 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:54.446 "is_configured": 
true, 00:32:54.446 "data_offset": 256, 00:32:54.446 "data_size": 7936 00:32:54.446 } 00:32:54.446 ] 00:32:54.446 }' 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:54.446 [2024-11-20 13:52:57.316413] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:54.446 [2024-11-20 13:52:57.316669] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:54.446 [2024-11-20 13:52:57.316707] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:54.446 request: 00:32:54.446 { 00:32:54.446 "base_bdev": "BaseBdev1", 00:32:54.446 "raid_bdev": "raid_bdev1", 00:32:54.446 "method": "bdev_raid_add_base_bdev", 00:32:54.446 "req_id": 1 00:32:54.446 } 00:32:54.446 Got JSON-RPC error response 00:32:54.446 response: 00:32:54.446 { 00:32:54.446 "code": -22, 00:32:54.446 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:54.446 } 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:54.446 13:52:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:32:55.827 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:55.827 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:55.827 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:55.827 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:32:55.827 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:55.827 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:55.827 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:55.827 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:55.827 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:55.827 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:55.827 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:55.827 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:55.827 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.827 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:55.827 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.827 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:55.827 "name": "raid_bdev1", 00:32:55.827 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:55.827 "strip_size_kb": 0, 00:32:55.827 "state": "online", 00:32:55.827 "raid_level": "raid1", 00:32:55.827 "superblock": true, 00:32:55.827 "num_base_bdevs": 2, 00:32:55.827 "num_base_bdevs_discovered": 1, 00:32:55.827 "num_base_bdevs_operational": 1, 00:32:55.827 "base_bdevs_list": [ 00:32:55.827 { 00:32:55.827 "name": null, 00:32:55.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:55.827 "is_configured": false, 00:32:55.827 
"data_offset": 0, 00:32:55.827 "data_size": 7936 00:32:55.827 }, 00:32:55.827 { 00:32:55.827 "name": "BaseBdev2", 00:32:55.827 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:55.827 "is_configured": true, 00:32:55.827 "data_offset": 256, 00:32:55.827 "data_size": 7936 00:32:55.827 } 00:32:55.827 ] 00:32:55.827 }' 00:32:55.827 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:55.827 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:56.094 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:56.094 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:56.094 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:56.094 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:56.094 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:56.094 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:56.094 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.094 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:56.094 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:56.094 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.094 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:56.094 "name": "raid_bdev1", 00:32:56.094 "uuid": "c34feffb-3841-4419-872f-ef7f0c6b0147", 00:32:56.094 
"strip_size_kb": 0, 00:32:56.094 "state": "online", 00:32:56.094 "raid_level": "raid1", 00:32:56.094 "superblock": true, 00:32:56.094 "num_base_bdevs": 2, 00:32:56.094 "num_base_bdevs_discovered": 1, 00:32:56.094 "num_base_bdevs_operational": 1, 00:32:56.094 "base_bdevs_list": [ 00:32:56.094 { 00:32:56.094 "name": null, 00:32:56.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:56.094 "is_configured": false, 00:32:56.094 "data_offset": 0, 00:32:56.094 "data_size": 7936 00:32:56.094 }, 00:32:56.094 { 00:32:56.094 "name": "BaseBdev2", 00:32:56.094 "uuid": "e77d3dee-319c-555f-a988-75d0988b1df9", 00:32:56.094 "is_configured": true, 00:32:56.094 "data_offset": 256, 00:32:56.094 "data_size": 7936 00:32:56.094 } 00:32:56.094 ] 00:32:56.094 }' 00:32:56.094 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:56.094 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:56.094 13:52:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:56.362 13:52:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:56.362 13:52:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88456 00:32:56.362 13:52:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88456 ']' 00:32:56.362 13:52:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88456 00:32:56.362 13:52:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:32:56.362 13:52:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:56.362 13:52:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88456 00:32:56.362 13:52:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:56.362 13:52:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:56.362 killing process with pid 88456 00:32:56.362 13:52:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88456' 00:32:56.362 13:52:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88456 00:32:56.362 Received shutdown signal, test time was about 60.000000 seconds 00:32:56.362 00:32:56.362 Latency(us) 00:32:56.362 [2024-11-20T13:52:59.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:56.362 [2024-11-20T13:52:59.279Z] =================================================================================================================== 00:32:56.362 [2024-11-20T13:52:59.279Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:56.362 [2024-11-20 13:52:59.049185] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:56.362 13:52:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88456 00:32:56.362 [2024-11-20 13:52:59.049430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:56.362 [2024-11-20 13:52:59.049512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:56.362 [2024-11-20 13:52:59.049541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:32:56.625 [2024-11-20 13:52:59.320689] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:57.561 13:53:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:32:57.561 00:32:57.561 real 0m21.657s 00:32:57.561 user 0m29.314s 00:32:57.561 sys 0m2.593s 00:32:57.561 13:53:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:57.561 ************************************ 00:32:57.561 13:53:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:57.561 END TEST raid_rebuild_test_sb_md_separate 00:32:57.561 ************************************ 00:32:57.561 13:53:00 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:32:57.561 13:53:00 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:32:57.561 13:53:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:32:57.561 13:53:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:57.561 13:53:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:57.561 ************************************ 00:32:57.561 START TEST raid_state_function_test_sb_md_interleaved 00:32:57.561 ************************************ 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:57.561 13:53:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:32:57.561 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:32:57.562 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89159 00:32:57.562 Process raid pid: 89159 00:32:57.562 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89159' 00:32:57.562 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:57.562 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89159 00:32:57.562 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89159 ']' 00:32:57.562 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.562 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:57.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:57.562 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:57.562 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:57.562 13:53:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:57.820 [2024-11-20 13:53:00.497326] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:32:57.820 [2024-11-20 13:53:00.497526] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:57.820 [2024-11-20 13:53:00.678013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.079 [2024-11-20 13:53:00.800580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.337 [2024-11-20 13:53:00.997770] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:58.337 [2024-11-20 13:53:00.997850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:58.595 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:58.595 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:32:58.595 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:58.595 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.595 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:58.595 [2024-11-20 13:53:01.448983] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:58.595 [2024-11-20 13:53:01.449082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:58.595 [2024-11-20 13:53:01.449102] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:58.595 [2024-11-20 13:53:01.449139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:58.595 13:53:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.595 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:58.596 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:58.596 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:58.596 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:58.596 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:58.596 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:58.596 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:58.596 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:58.596 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:58.596 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:58.596 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:58.596 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:58.596 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.596 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:58.596 13:53:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.596 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:58.596 "name": "Existed_Raid", 00:32:58.596 "uuid": "193f5484-7109-4023-97c7-8c5dbafb1164", 00:32:58.596 "strip_size_kb": 0, 00:32:58.596 "state": "configuring", 00:32:58.596 "raid_level": "raid1", 00:32:58.596 "superblock": true, 00:32:58.596 "num_base_bdevs": 2, 00:32:58.596 "num_base_bdevs_discovered": 0, 00:32:58.596 "num_base_bdevs_operational": 2, 00:32:58.596 "base_bdevs_list": [ 00:32:58.596 { 00:32:58.596 "name": "BaseBdev1", 00:32:58.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:58.596 "is_configured": false, 00:32:58.596 "data_offset": 0, 00:32:58.596 "data_size": 0 00:32:58.596 }, 00:32:58.596 { 00:32:58.596 "name": "BaseBdev2", 00:32:58.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:58.596 "is_configured": false, 00:32:58.596 "data_offset": 0, 00:32:58.596 "data_size": 0 00:32:58.596 } 00:32:58.596 ] 00:32:58.596 }' 00:32:58.596 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:58.596 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:59.163 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:59.163 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.163 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:59.163 [2024-11-20 13:53:01.953082] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:59.163 [2024-11-20 13:53:01.953160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:32:59.163 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.163 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:59.163 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.163 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:59.163 [2024-11-20 13:53:01.961045] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:59.163 [2024-11-20 13:53:01.961110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:59.163 [2024-11-20 13:53:01.961128] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:59.163 [2024-11-20 13:53:01.961166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:59.163 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.163 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:32:59.163 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.163 13:53:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:59.163 [2024-11-20 13:53:02.004231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:59.163 BaseBdev1 00:32:59.163 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.163 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:59.163 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:32:59.163 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:59.163 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:32:59.163 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:59.163 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:59.163 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:32:59.163 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.163 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:59.163 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.163 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:59.163 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.163 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:59.163 [ 00:32:59.163 { 00:32:59.163 "name": "BaseBdev1", 00:32:59.163 "aliases": [ 00:32:59.163 "4c61694b-cf48-406c-a0e5-d4510266b91a" 00:32:59.163 ], 00:32:59.163 "product_name": "Malloc disk", 00:32:59.163 "block_size": 4128, 00:32:59.163 "num_blocks": 8192, 00:32:59.163 "uuid": "4c61694b-cf48-406c-a0e5-d4510266b91a", 00:32:59.163 "md_size": 32, 00:32:59.163 
"md_interleave": true, 00:32:59.163 "dif_type": 0, 00:32:59.163 "assigned_rate_limits": { 00:32:59.163 "rw_ios_per_sec": 0, 00:32:59.163 "rw_mbytes_per_sec": 0, 00:32:59.163 "r_mbytes_per_sec": 0, 00:32:59.163 "w_mbytes_per_sec": 0 00:32:59.163 }, 00:32:59.163 "claimed": true, 00:32:59.163 "claim_type": "exclusive_write", 00:32:59.163 "zoned": false, 00:32:59.163 "supported_io_types": { 00:32:59.163 "read": true, 00:32:59.163 "write": true, 00:32:59.163 "unmap": true, 00:32:59.163 "flush": true, 00:32:59.163 "reset": true, 00:32:59.164 "nvme_admin": false, 00:32:59.164 "nvme_io": false, 00:32:59.164 "nvme_io_md": false, 00:32:59.164 "write_zeroes": true, 00:32:59.164 "zcopy": true, 00:32:59.164 "get_zone_info": false, 00:32:59.164 "zone_management": false, 00:32:59.164 "zone_append": false, 00:32:59.164 "compare": false, 00:32:59.164 "compare_and_write": false, 00:32:59.164 "abort": true, 00:32:59.164 "seek_hole": false, 00:32:59.164 "seek_data": false, 00:32:59.164 "copy": true, 00:32:59.164 "nvme_iov_md": false 00:32:59.164 }, 00:32:59.164 "memory_domains": [ 00:32:59.164 { 00:32:59.164 "dma_device_id": "system", 00:32:59.164 "dma_device_type": 1 00:32:59.164 }, 00:32:59.164 { 00:32:59.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:59.164 "dma_device_type": 2 00:32:59.164 } 00:32:59.164 ], 00:32:59.164 "driver_specific": {} 00:32:59.164 } 00:32:59.164 ] 00:32:59.164 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.164 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:32:59.164 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:59.164 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:59.164 13:53:02 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:59.164 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:59.164 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:59.164 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:59.164 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:59.164 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:59.164 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:59.164 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:59.164 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:59.164 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.164 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:59.164 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:59.164 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.423 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:59.423 "name": "Existed_Raid", 00:32:59.423 "uuid": "4a37a798-1b74-47ae-a9ba-b9be9a729dde", 00:32:59.423 "strip_size_kb": 0, 00:32:59.423 "state": "configuring", 00:32:59.423 "raid_level": "raid1", 
00:32:59.423 "superblock": true, 00:32:59.423 "num_base_bdevs": 2, 00:32:59.423 "num_base_bdevs_discovered": 1, 00:32:59.423 "num_base_bdevs_operational": 2, 00:32:59.423 "base_bdevs_list": [ 00:32:59.423 { 00:32:59.423 "name": "BaseBdev1", 00:32:59.423 "uuid": "4c61694b-cf48-406c-a0e5-d4510266b91a", 00:32:59.423 "is_configured": true, 00:32:59.423 "data_offset": 256, 00:32:59.423 "data_size": 7936 00:32:59.423 }, 00:32:59.423 { 00:32:59.423 "name": "BaseBdev2", 00:32:59.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.423 "is_configured": false, 00:32:59.423 "data_offset": 0, 00:32:59.423 "data_size": 0 00:32:59.423 } 00:32:59.423 ] 00:32:59.423 }' 00:32:59.423 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:59.423 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:59.681 [2024-11-20 13:53:02.548516] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:59.681 [2024-11-20 13:53:02.548588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:59.681 [2024-11-20 13:53:02.556557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:59.681 [2024-11-20 13:53:02.559182] bdev.c:8685:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:59.681 [2024-11-20 13:53:02.559266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:59.681 
13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:59.681 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.941 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:59.941 "name": "Existed_Raid", 00:32:59.941 "uuid": "a0ade3cc-8857-44a0-9545-1404fd1b9e7d", 00:32:59.941 "strip_size_kb": 0, 00:32:59.941 "state": "configuring", 00:32:59.941 "raid_level": "raid1", 00:32:59.941 "superblock": true, 00:32:59.941 "num_base_bdevs": 2, 00:32:59.941 "num_base_bdevs_discovered": 1, 00:32:59.941 "num_base_bdevs_operational": 2, 00:32:59.941 "base_bdevs_list": [ 00:32:59.941 { 00:32:59.941 "name": "BaseBdev1", 00:32:59.941 "uuid": "4c61694b-cf48-406c-a0e5-d4510266b91a", 00:32:59.941 "is_configured": true, 00:32:59.941 "data_offset": 256, 00:32:59.941 "data_size": 7936 00:32:59.941 }, 00:32:59.941 { 00:32:59.941 "name": "BaseBdev2", 00:32:59.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.941 "is_configured": false, 00:32:59.941 "data_offset": 0, 00:32:59.941 "data_size": 0 00:32:59.941 } 00:32:59.941 ] 00:32:59.941 }' 00:32:59.941 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:32:59.941 13:53:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:00.200 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:33:00.200 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.200 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:00.459 [2024-11-20 13:53:03.117509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:00.459 [2024-11-20 13:53:03.117841] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:00.459 [2024-11-20 13:53:03.117868] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:00.459 [2024-11-20 13:53:03.118024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:00.459 [2024-11-20 13:53:03.118148] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:00.459 [2024-11-20 13:53:03.118171] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:33:00.459 BaseBdev2 00:33:00.459 [2024-11-20 13:53:03.118292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:00.459 [ 00:33:00.459 { 00:33:00.459 "name": "BaseBdev2", 00:33:00.459 "aliases": [ 00:33:00.459 "2b10a9b8-9539-4a21-af22-9ecace514ad7" 00:33:00.459 ], 00:33:00.459 "product_name": "Malloc disk", 00:33:00.459 "block_size": 4128, 00:33:00.459 "num_blocks": 8192, 00:33:00.459 "uuid": "2b10a9b8-9539-4a21-af22-9ecace514ad7", 00:33:00.459 "md_size": 32, 00:33:00.459 "md_interleave": true, 00:33:00.459 "dif_type": 0, 00:33:00.459 "assigned_rate_limits": { 00:33:00.459 "rw_ios_per_sec": 0, 00:33:00.459 "rw_mbytes_per_sec": 0, 00:33:00.459 "r_mbytes_per_sec": 0, 00:33:00.459 "w_mbytes_per_sec": 0 00:33:00.459 }, 00:33:00.459 "claimed": true, 00:33:00.459 "claim_type": "exclusive_write", 
00:33:00.459 "zoned": false, 00:33:00.459 "supported_io_types": { 00:33:00.459 "read": true, 00:33:00.459 "write": true, 00:33:00.459 "unmap": true, 00:33:00.459 "flush": true, 00:33:00.459 "reset": true, 00:33:00.459 "nvme_admin": false, 00:33:00.459 "nvme_io": false, 00:33:00.459 "nvme_io_md": false, 00:33:00.459 "write_zeroes": true, 00:33:00.459 "zcopy": true, 00:33:00.459 "get_zone_info": false, 00:33:00.459 "zone_management": false, 00:33:00.459 "zone_append": false, 00:33:00.459 "compare": false, 00:33:00.459 "compare_and_write": false, 00:33:00.459 "abort": true, 00:33:00.459 "seek_hole": false, 00:33:00.459 "seek_data": false, 00:33:00.459 "copy": true, 00:33:00.459 "nvme_iov_md": false 00:33:00.459 }, 00:33:00.459 "memory_domains": [ 00:33:00.459 { 00:33:00.459 "dma_device_id": "system", 00:33:00.459 "dma_device_type": 1 00:33:00.459 }, 00:33:00.459 { 00:33:00.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:00.459 "dma_device_type": 2 00:33:00.459 } 00:33:00.459 ], 00:33:00.459 "driver_specific": {} 00:33:00.459 } 00:33:00.459 ] 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:00.459 
13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:00.459 "name": "Existed_Raid", 00:33:00.459 "uuid": "a0ade3cc-8857-44a0-9545-1404fd1b9e7d", 00:33:00.459 "strip_size_kb": 0, 00:33:00.459 "state": "online", 00:33:00.459 "raid_level": "raid1", 00:33:00.459 "superblock": true, 00:33:00.459 "num_base_bdevs": 2, 00:33:00.459 "num_base_bdevs_discovered": 2, 00:33:00.459 
"num_base_bdevs_operational": 2, 00:33:00.459 "base_bdevs_list": [ 00:33:00.459 { 00:33:00.459 "name": "BaseBdev1", 00:33:00.459 "uuid": "4c61694b-cf48-406c-a0e5-d4510266b91a", 00:33:00.459 "is_configured": true, 00:33:00.459 "data_offset": 256, 00:33:00.459 "data_size": 7936 00:33:00.459 }, 00:33:00.459 { 00:33:00.459 "name": "BaseBdev2", 00:33:00.459 "uuid": "2b10a9b8-9539-4a21-af22-9ecace514ad7", 00:33:00.459 "is_configured": true, 00:33:00.459 "data_offset": 256, 00:33:00.459 "data_size": 7936 00:33:00.459 } 00:33:00.459 ] 00:33:00.459 }' 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:00.459 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:01.026 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:33:01.026 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:01.026 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.027 13:53:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:01.027 [2024-11-20 13:53:03.710259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:01.027 "name": "Existed_Raid", 00:33:01.027 "aliases": [ 00:33:01.027 "a0ade3cc-8857-44a0-9545-1404fd1b9e7d" 00:33:01.027 ], 00:33:01.027 "product_name": "Raid Volume", 00:33:01.027 "block_size": 4128, 00:33:01.027 "num_blocks": 7936, 00:33:01.027 "uuid": "a0ade3cc-8857-44a0-9545-1404fd1b9e7d", 00:33:01.027 "md_size": 32, 00:33:01.027 "md_interleave": true, 00:33:01.027 "dif_type": 0, 00:33:01.027 "assigned_rate_limits": { 00:33:01.027 "rw_ios_per_sec": 0, 00:33:01.027 "rw_mbytes_per_sec": 0, 00:33:01.027 "r_mbytes_per_sec": 0, 00:33:01.027 "w_mbytes_per_sec": 0 00:33:01.027 }, 00:33:01.027 "claimed": false, 00:33:01.027 "zoned": false, 00:33:01.027 "supported_io_types": { 00:33:01.027 "read": true, 00:33:01.027 "write": true, 00:33:01.027 "unmap": false, 00:33:01.027 "flush": false, 00:33:01.027 "reset": true, 00:33:01.027 "nvme_admin": false, 00:33:01.027 "nvme_io": false, 00:33:01.027 "nvme_io_md": false, 00:33:01.027 "write_zeroes": true, 00:33:01.027 "zcopy": false, 00:33:01.027 "get_zone_info": false, 00:33:01.027 "zone_management": false, 00:33:01.027 "zone_append": false, 00:33:01.027 "compare": false, 00:33:01.027 "compare_and_write": false, 00:33:01.027 "abort": false, 00:33:01.027 "seek_hole": false, 00:33:01.027 "seek_data": false, 00:33:01.027 "copy": false, 00:33:01.027 "nvme_iov_md": false 00:33:01.027 }, 00:33:01.027 "memory_domains": [ 00:33:01.027 { 00:33:01.027 "dma_device_id": "system", 00:33:01.027 "dma_device_type": 1 00:33:01.027 }, 00:33:01.027 { 00:33:01.027 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:33:01.027 "dma_device_type": 2 00:33:01.027 }, 00:33:01.027 { 00:33:01.027 "dma_device_id": "system", 00:33:01.027 "dma_device_type": 1 00:33:01.027 }, 00:33:01.027 { 00:33:01.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:01.027 "dma_device_type": 2 00:33:01.027 } 00:33:01.027 ], 00:33:01.027 "driver_specific": { 00:33:01.027 "raid": { 00:33:01.027 "uuid": "a0ade3cc-8857-44a0-9545-1404fd1b9e7d", 00:33:01.027 "strip_size_kb": 0, 00:33:01.027 "state": "online", 00:33:01.027 "raid_level": "raid1", 00:33:01.027 "superblock": true, 00:33:01.027 "num_base_bdevs": 2, 00:33:01.027 "num_base_bdevs_discovered": 2, 00:33:01.027 "num_base_bdevs_operational": 2, 00:33:01.027 "base_bdevs_list": [ 00:33:01.027 { 00:33:01.027 "name": "BaseBdev1", 00:33:01.027 "uuid": "4c61694b-cf48-406c-a0e5-d4510266b91a", 00:33:01.027 "is_configured": true, 00:33:01.027 "data_offset": 256, 00:33:01.027 "data_size": 7936 00:33:01.027 }, 00:33:01.027 { 00:33:01.027 "name": "BaseBdev2", 00:33:01.027 "uuid": "2b10a9b8-9539-4a21-af22-9ecace514ad7", 00:33:01.027 "is_configured": true, 00:33:01.027 "data_offset": 256, 00:33:01.027 "data_size": 7936 00:33:01.027 } 00:33:01.027 ] 00:33:01.027 } 00:33:01.027 } 00:33:01.027 }' 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:33:01.027 BaseBdev2' 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:01.027 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:01.286 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.286 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:33:01.286 
13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:33:01.286 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:01.286 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.286 13:53:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:01.286 [2024-11-20 13:53:03.985866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:01.286 13:53:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:01.286 "name": "Existed_Raid", 00:33:01.286 "uuid": "a0ade3cc-8857-44a0-9545-1404fd1b9e7d", 00:33:01.286 "strip_size_kb": 0, 00:33:01.286 "state": "online", 00:33:01.286 "raid_level": "raid1", 00:33:01.286 "superblock": true, 00:33:01.286 "num_base_bdevs": 2, 00:33:01.286 "num_base_bdevs_discovered": 1, 00:33:01.286 "num_base_bdevs_operational": 1, 00:33:01.286 "base_bdevs_list": [ 00:33:01.286 { 00:33:01.286 "name": null, 00:33:01.286 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:33:01.286 "is_configured": false, 00:33:01.286 "data_offset": 0, 00:33:01.286 "data_size": 7936 00:33:01.286 }, 00:33:01.286 { 00:33:01.286 "name": "BaseBdev2", 00:33:01.286 "uuid": "2b10a9b8-9539-4a21-af22-9ecace514ad7", 00:33:01.286 "is_configured": true, 00:33:01.286 "data_offset": 256, 00:33:01.286 "data_size": 7936 00:33:01.286 } 00:33:01.286 ] 00:33:01.286 }' 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:01.286 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:01.853 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:33:01.853 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:01.853 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:01.853 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:01.853 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.853 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:01.853 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.853 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:01.853 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:01.853 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:33:01.853 13:53:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.853 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:01.853 [2024-11-20 13:53:04.672340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:01.853 [2024-11-20 13:53:04.672701] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:01.853 [2024-11-20 13:53:04.750425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:01.853 [2024-11-20 13:53:04.750496] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:01.853 [2024-11-20 13:53:04.750519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:33:01.853 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.853 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:01.853 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:01.853 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:01.853 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.853 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:01.853 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:33:01.853 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.111 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:33:02.111 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:33:02.111 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:33:02.111 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89159 00:33:02.111 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89159 ']' 00:33:02.111 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89159 00:33:02.111 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:33:02.111 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:02.111 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89159 00:33:02.111 killing process with pid 89159 00:33:02.111 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:02.111 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:02.111 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89159' 00:33:02.111 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89159 00:33:02.111 [2024-11-20 13:53:04.842810] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:02.111 13:53:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89159 00:33:02.111 [2024-11-20 13:53:04.857389] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:03.102 
13:53:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:33:03.102 00:33:03.102 real 0m5.476s 00:33:03.102 user 0m8.306s 00:33:03.102 sys 0m0.831s 00:33:03.102 13:53:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:03.102 13:53:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:03.102 ************************************ 00:33:03.102 END TEST raid_state_function_test_sb_md_interleaved 00:33:03.102 ************************************ 00:33:03.102 13:53:05 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:33:03.102 13:53:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:03.102 13:53:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:03.102 13:53:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:03.102 ************************************ 00:33:03.102 START TEST raid_superblock_test_md_interleaved 00:33:03.102 ************************************ 00:33:03.102 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:33:03.102 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:33:03.102 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:33:03.102 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:33:03.102 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:33:03.102 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:33:03.102 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:33:03.102 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:33:03.102 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:33:03.102 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:33:03.102 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:33:03.102 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:33:03.103 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:33:03.103 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:33:03.103 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:33:03.103 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:33:03.103 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89417 00:33:03.103 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89417 00:33:03.103 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:33:03.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:03.103 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89417 ']' 00:33:03.103 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.103 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:03.103 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.103 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:03.103 13:53:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:03.362 [2024-11-20 13:53:06.036119] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:33:03.362 [2024-11-20 13:53:06.036302] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89417 ] 00:33:03.362 [2024-11-20 13:53:06.220756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.620 [2024-11-20 13:53:06.345966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.878 [2024-11-20 13:53:06.543415] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:03.878 [2024-11-20 13:53:06.543498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:04.446 malloc1 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:04.446 [2024-11-20 13:53:07.130343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc1 00:33:04.446 [2024-11-20 13:53:07.130436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:04.446 [2024-11-20 13:53:07.130473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:04.446 [2024-11-20 13:53:07.130491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:04.446 [2024-11-20 13:53:07.132849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:04.446 [2024-11-20 13:53:07.132906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:04.446 pt1 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:04.446 malloc2 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:04.446 [2024-11-20 13:53:07.182678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:04.446 [2024-11-20 13:53:07.182758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:04.446 [2024-11-20 13:53:07.182792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:04.446 [2024-11-20 13:53:07.182809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:04.446 [2024-11-20 13:53:07.185102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:04.446 [2024-11-20 13:53:07.185159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:04.446 pt2 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:33:04.446 
13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:04.446 [2024-11-20 13:53:07.194728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:04.446 [2024-11-20 13:53:07.197338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:04.446 [2024-11-20 13:53:07.197575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:04.446 [2024-11-20 13:53:07.197595] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:04.446 [2024-11-20 13:53:07.197690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:04.446 [2024-11-20 13:53:07.197795] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:04.446 [2024-11-20 13:53:07.197817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:04.446 [2024-11-20 13:53:07.197938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:04.446 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:04.447 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:04.447 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.447 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:04.447 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.447 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:04.447 "name": "raid_bdev1", 00:33:04.447 "uuid": "92cc2ec9-7dae-469c-b2f6-56f21bc88ae2", 00:33:04.447 "strip_size_kb": 0, 00:33:04.447 "state": "online", 00:33:04.447 "raid_level": "raid1", 00:33:04.447 "superblock": true, 00:33:04.447 "num_base_bdevs": 2, 00:33:04.447 "num_base_bdevs_discovered": 2, 00:33:04.447 "num_base_bdevs_operational": 2, 00:33:04.447 "base_bdevs_list": [ 00:33:04.447 { 00:33:04.447 "name": "pt1", 00:33:04.447 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:04.447 "is_configured": true, 00:33:04.447 "data_offset": 256, 00:33:04.447 "data_size": 7936 00:33:04.447 }, 00:33:04.447 { 00:33:04.447 "name": 
"pt2", 00:33:04.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:04.447 "is_configured": true, 00:33:04.447 "data_offset": 256, 00:33:04.447 "data_size": 7936 00:33:04.447 } 00:33:04.447 ] 00:33:04.447 }' 00:33:04.447 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:04.447 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:05.015 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:33:05.015 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:05.015 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:05.015 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:05.015 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:33:05.015 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:05.015 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:05.015 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.015 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:05.015 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:05.015 [2024-11-20 13:53:07.711235] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:05.015 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.015 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:33:05.015 "name": "raid_bdev1", 00:33:05.015 "aliases": [ 00:33:05.015 "92cc2ec9-7dae-469c-b2f6-56f21bc88ae2" 00:33:05.015 ], 00:33:05.015 "product_name": "Raid Volume", 00:33:05.015 "block_size": 4128, 00:33:05.015 "num_blocks": 7936, 00:33:05.015 "uuid": "92cc2ec9-7dae-469c-b2f6-56f21bc88ae2", 00:33:05.015 "md_size": 32, 00:33:05.015 "md_interleave": true, 00:33:05.015 "dif_type": 0, 00:33:05.015 "assigned_rate_limits": { 00:33:05.015 "rw_ios_per_sec": 0, 00:33:05.015 "rw_mbytes_per_sec": 0, 00:33:05.015 "r_mbytes_per_sec": 0, 00:33:05.015 "w_mbytes_per_sec": 0 00:33:05.015 }, 00:33:05.015 "claimed": false, 00:33:05.015 "zoned": false, 00:33:05.015 "supported_io_types": { 00:33:05.015 "read": true, 00:33:05.015 "write": true, 00:33:05.015 "unmap": false, 00:33:05.015 "flush": false, 00:33:05.015 "reset": true, 00:33:05.015 "nvme_admin": false, 00:33:05.015 "nvme_io": false, 00:33:05.015 "nvme_io_md": false, 00:33:05.015 "write_zeroes": true, 00:33:05.015 "zcopy": false, 00:33:05.015 "get_zone_info": false, 00:33:05.015 "zone_management": false, 00:33:05.015 "zone_append": false, 00:33:05.015 "compare": false, 00:33:05.015 "compare_and_write": false, 00:33:05.015 "abort": false, 00:33:05.015 "seek_hole": false, 00:33:05.015 "seek_data": false, 00:33:05.015 "copy": false, 00:33:05.015 "nvme_iov_md": false 00:33:05.015 }, 00:33:05.015 "memory_domains": [ 00:33:05.015 { 00:33:05.015 "dma_device_id": "system", 00:33:05.015 "dma_device_type": 1 00:33:05.015 }, 00:33:05.015 { 00:33:05.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:05.015 "dma_device_type": 2 00:33:05.015 }, 00:33:05.015 { 00:33:05.015 "dma_device_id": "system", 00:33:05.015 "dma_device_type": 1 00:33:05.015 }, 00:33:05.015 { 00:33:05.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:05.015 "dma_device_type": 2 00:33:05.015 } 00:33:05.015 ], 00:33:05.015 "driver_specific": { 00:33:05.015 "raid": { 00:33:05.015 "uuid": "92cc2ec9-7dae-469c-b2f6-56f21bc88ae2", 00:33:05.015 
"strip_size_kb": 0, 00:33:05.015 "state": "online", 00:33:05.015 "raid_level": "raid1", 00:33:05.015 "superblock": true, 00:33:05.015 "num_base_bdevs": 2, 00:33:05.015 "num_base_bdevs_discovered": 2, 00:33:05.015 "num_base_bdevs_operational": 2, 00:33:05.015 "base_bdevs_list": [ 00:33:05.015 { 00:33:05.015 "name": "pt1", 00:33:05.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:05.015 "is_configured": true, 00:33:05.015 "data_offset": 256, 00:33:05.015 "data_size": 7936 00:33:05.015 }, 00:33:05.015 { 00:33:05.015 "name": "pt2", 00:33:05.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:05.015 "is_configured": true, 00:33:05.015 "data_offset": 256, 00:33:05.015 "data_size": 7936 00:33:05.015 } 00:33:05.015 ] 00:33:05.015 } 00:33:05.015 } 00:33:05.015 }' 00:33:05.015 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:05.015 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:33:05.015 pt2' 00:33:05.015 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:05.015 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:33:05.015 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:05.015 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:05.016 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.016 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:05.016 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:05.016 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.016 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:33:05.016 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:33:05.016 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:05.016 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:05.016 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.016 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:05.016 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:05.275 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.275 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:33:05.275 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:33:05.275 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:33:05.275 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:05.275 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.275 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 
-- # set +x 00:33:05.275 [2024-11-20 13:53:07.979180] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:05.275 13:53:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.275 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=92cc2ec9-7dae-469c-b2f6-56f21bc88ae2 00:33:05.275 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 92cc2ec9-7dae-469c-b2f6-56f21bc88ae2 ']' 00:33:05.275 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:05.275 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.275 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:05.275 [2024-11-20 13:53:08.026874] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:05.275 [2024-11-20 13:53:08.026999] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:05.275 [2024-11-20 13:53:08.027132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:05.275 [2024-11-20 13:53:08.027220] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:05.275 [2024-11-20 13:53:08.027253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:05.275 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.275 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.275 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:33:05.275 13:53:08 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.275 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:05.275 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.275 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:33:05.275 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:33:05.275 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:05.275 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:33:05.275 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.275 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:05.275 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.275 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r 
'[.[] | select(.product_name == "passthru")] | any' 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:05.276 [2024-11-20 13:53:08.166919] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:05.276 [2024-11-20 13:53:08.169262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:05.276 [2024-11-20 13:53:08.169553] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:33:05.276 [2024-11-20 13:53:08.169647] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:05.276 [2024-11-20 13:53:08.169677] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:05.276 [2024-11-20 13:53:08.169696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:33:05.276 request: 00:33:05.276 { 00:33:05.276 "name": "raid_bdev1", 00:33:05.276 "raid_level": "raid1", 00:33:05.276 "base_bdevs": [ 00:33:05.276 "malloc1", 00:33:05.276 "malloc2" 00:33:05.276 ], 00:33:05.276 "superblock": false, 00:33:05.276 "method": "bdev_raid_create", 00:33:05.276 "req_id": 1 00:33:05.276 } 00:33:05.276 Got JSON-RPC error response 00:33:05.276 response: 00:33:05.276 { 00:33:05.276 "code": -17, 00:33:05.276 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:05.276 } 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:05.276 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:05.535 [2024-11-20 13:53:08.242982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:05.535 [2024-11-20 13:53:08.243376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:05.535 [2024-11-20 13:53:08.243449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:33:05.535 [2024-11-20 13:53:08.243652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:05.535 [2024-11-20 13:53:08.246195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:05.535 [2024-11-20 13:53:08.246399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:05.535 [2024-11-20 13:53:08.246613] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt1 00:33:05.535 [2024-11-20 13:53:08.246738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:05.535 pt1 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:05.535 "name": "raid_bdev1", 00:33:05.535 "uuid": "92cc2ec9-7dae-469c-b2f6-56f21bc88ae2", 00:33:05.535 "strip_size_kb": 0, 00:33:05.535 "state": "configuring", 00:33:05.535 "raid_level": "raid1", 00:33:05.535 "superblock": true, 00:33:05.535 "num_base_bdevs": 2, 00:33:05.535 "num_base_bdevs_discovered": 1, 00:33:05.535 "num_base_bdevs_operational": 2, 00:33:05.535 "base_bdevs_list": [ 00:33:05.535 { 00:33:05.535 "name": "pt1", 00:33:05.535 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:05.535 "is_configured": true, 00:33:05.535 "data_offset": 256, 00:33:05.535 "data_size": 7936 00:33:05.535 }, 00:33:05.535 { 00:33:05.535 "name": null, 00:33:05.535 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:05.535 "is_configured": false, 00:33:05.535 "data_offset": 256, 00:33:05.535 "data_size": 7936 00:33:05.535 } 00:33:05.535 ] 00:33:05.535 }' 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:05.535 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:06.103 [2024-11-20 13:53:08.779178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:06.103 [2024-11-20 13:53:08.779307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:06.103 [2024-11-20 13:53:08.779342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:06.103 [2024-11-20 13:53:08.779361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:06.103 [2024-11-20 13:53:08.779580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:06.103 [2024-11-20 13:53:08.779614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:06.103 [2024-11-20 13:53:08.779716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:06.103 [2024-11-20 13:53:08.779758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:06.103 [2024-11-20 13:53:08.779877] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:06.103 [2024-11-20 13:53:08.779901] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:06.103 [2024-11-20 13:53:08.780454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:06.103 [2024-11-20 13:53:08.780706] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:06.103 [2024-11-20 13:53:08.780857] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:33:06.103 [2024-11-20 13:53:08.781093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:06.103 pt2 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.103 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:06.103 "name": "raid_bdev1", 00:33:06.103 "uuid": "92cc2ec9-7dae-469c-b2f6-56f21bc88ae2", 00:33:06.103 "strip_size_kb": 0, 00:33:06.103 "state": "online", 00:33:06.103 "raid_level": "raid1", 00:33:06.103 "superblock": true, 00:33:06.103 "num_base_bdevs": 2, 00:33:06.103 "num_base_bdevs_discovered": 2, 00:33:06.103 "num_base_bdevs_operational": 2, 00:33:06.103 "base_bdevs_list": [ 00:33:06.103 { 00:33:06.103 "name": "pt1", 00:33:06.103 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:06.103 "is_configured": true, 00:33:06.103 "data_offset": 256, 00:33:06.103 "data_size": 7936 00:33:06.103 }, 00:33:06.103 { 00:33:06.103 "name": "pt2", 00:33:06.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:06.103 "is_configured": true, 00:33:06.103 "data_offset": 256, 00:33:06.103 "data_size": 7936 00:33:06.103 } 00:33:06.103 ] 00:33:06.103 }' 00:33:06.104 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:06.104 13:53:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:06.670 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:33:06.670 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:06.670 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:06.670 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:06.670 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:33:06.670 13:53:09 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:06.670 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:06.670 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:06.670 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.670 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:06.670 [2024-11-20 13:53:09.299651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:06.670 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.670 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:06.670 "name": "raid_bdev1", 00:33:06.670 "aliases": [ 00:33:06.670 "92cc2ec9-7dae-469c-b2f6-56f21bc88ae2" 00:33:06.670 ], 00:33:06.670 "product_name": "Raid Volume", 00:33:06.670 "block_size": 4128, 00:33:06.670 "num_blocks": 7936, 00:33:06.670 "uuid": "92cc2ec9-7dae-469c-b2f6-56f21bc88ae2", 00:33:06.670 "md_size": 32, 00:33:06.670 "md_interleave": true, 00:33:06.670 "dif_type": 0, 00:33:06.670 "assigned_rate_limits": { 00:33:06.670 "rw_ios_per_sec": 0, 00:33:06.670 "rw_mbytes_per_sec": 0, 00:33:06.670 "r_mbytes_per_sec": 0, 00:33:06.670 "w_mbytes_per_sec": 0 00:33:06.670 }, 00:33:06.670 "claimed": false, 00:33:06.670 "zoned": false, 00:33:06.670 "supported_io_types": { 00:33:06.670 "read": true, 00:33:06.670 "write": true, 00:33:06.670 "unmap": false, 00:33:06.670 "flush": false, 00:33:06.670 "reset": true, 00:33:06.670 "nvme_admin": false, 00:33:06.670 "nvme_io": false, 00:33:06.670 "nvme_io_md": false, 00:33:06.670 "write_zeroes": true, 00:33:06.670 "zcopy": false, 00:33:06.670 "get_zone_info": false, 00:33:06.670 "zone_management": 
false, 00:33:06.670 "zone_append": false, 00:33:06.670 "compare": false, 00:33:06.670 "compare_and_write": false, 00:33:06.671 "abort": false, 00:33:06.671 "seek_hole": false, 00:33:06.671 "seek_data": false, 00:33:06.671 "copy": false, 00:33:06.671 "nvme_iov_md": false 00:33:06.671 }, 00:33:06.671 "memory_domains": [ 00:33:06.671 { 00:33:06.671 "dma_device_id": "system", 00:33:06.671 "dma_device_type": 1 00:33:06.671 }, 00:33:06.671 { 00:33:06.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:06.671 "dma_device_type": 2 00:33:06.671 }, 00:33:06.671 { 00:33:06.671 "dma_device_id": "system", 00:33:06.671 "dma_device_type": 1 00:33:06.671 }, 00:33:06.671 { 00:33:06.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:06.671 "dma_device_type": 2 00:33:06.671 } 00:33:06.671 ], 00:33:06.671 "driver_specific": { 00:33:06.671 "raid": { 00:33:06.671 "uuid": "92cc2ec9-7dae-469c-b2f6-56f21bc88ae2", 00:33:06.671 "strip_size_kb": 0, 00:33:06.671 "state": "online", 00:33:06.671 "raid_level": "raid1", 00:33:06.671 "superblock": true, 00:33:06.671 "num_base_bdevs": 2, 00:33:06.671 "num_base_bdevs_discovered": 2, 00:33:06.671 "num_base_bdevs_operational": 2, 00:33:06.671 "base_bdevs_list": [ 00:33:06.671 { 00:33:06.671 "name": "pt1", 00:33:06.671 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:06.671 "is_configured": true, 00:33:06.671 "data_offset": 256, 00:33:06.671 "data_size": 7936 00:33:06.671 }, 00:33:06.671 { 00:33:06.671 "name": "pt2", 00:33:06.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:06.671 "is_configured": true, 00:33:06.671 "data_offset": 256, 00:33:06.671 "data_size": 7936 00:33:06.671 } 00:33:06.671 ] 00:33:06.671 } 00:33:06.671 } 00:33:06.671 }' 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 
00:33:06.671 pt2' 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:33:06.671 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:06.671 [2024-11-20 13:53:09.583732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 92cc2ec9-7dae-469c-b2f6-56f21bc88ae2 '!=' 92cc2ec9-7dae-469c-b2f6-56f21bc88ae2 ']' 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.930 13:53:09 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:06.930 [2024-11-20 13:53:09.631460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.930 13:53:09 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:06.930 "name": "raid_bdev1", 00:33:06.930 "uuid": "92cc2ec9-7dae-469c-b2f6-56f21bc88ae2", 00:33:06.930 "strip_size_kb": 0, 00:33:06.930 "state": "online", 00:33:06.930 "raid_level": "raid1", 00:33:06.930 "superblock": true, 00:33:06.930 "num_base_bdevs": 2, 00:33:06.930 "num_base_bdevs_discovered": 1, 00:33:06.930 "num_base_bdevs_operational": 1, 00:33:06.930 "base_bdevs_list": [ 00:33:06.930 { 00:33:06.930 "name": null, 00:33:06.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:06.930 "is_configured": false, 00:33:06.930 "data_offset": 0, 00:33:06.930 "data_size": 7936 00:33:06.930 }, 00:33:06.930 { 00:33:06.930 "name": "pt2", 00:33:06.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:06.930 "is_configured": true, 00:33:06.930 "data_offset": 256, 00:33:06.930 "data_size": 7936 00:33:06.930 } 00:33:06.930 ] 00:33:06.930 }' 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:06.930 13:53:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:07.498 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:07.498 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.498 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:07.498 [2024-11-20 13:53:10.235595] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:07.498 [2024-11-20 13:53:10.235646] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:33:07.498 [2024-11-20 13:53:10.235782] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:07.498 [2024-11-20 13:53:10.235854] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:07.498 [2024-11-20 13:53:10.235879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:33:07.498 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.498 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:07.498 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:33:07.498 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.498 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:07.499 
13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:07.499 [2024-11-20 13:53:10.315604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:07.499 [2024-11-20 13:53:10.315721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:07.499 [2024-11-20 13:53:10.315750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:33:07.499 [2024-11-20 13:53:10.315771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:07.499 [2024-11-20 13:53:10.318331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:07.499 [2024-11-20 13:53:10.318381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:07.499 [2024-11-20 13:53:10.318459] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:07.499 [2024-11-20 13:53:10.318533] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:07.499 [2024-11-20 13:53:10.318629] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:07.499 [2024-11-20 13:53:10.318653] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:07.499 [2024-11-20 13:53:10.318763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:07.499 [2024-11-20 13:53:10.318854] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:07.499 [2024-11-20 13:53:10.318869] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:33:07.499 [2024-11-20 13:53:10.318987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:07.499 pt2 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:07.499 "name": "raid_bdev1", 00:33:07.499 "uuid": "92cc2ec9-7dae-469c-b2f6-56f21bc88ae2", 00:33:07.499 "strip_size_kb": 0, 00:33:07.499 "state": "online", 00:33:07.499 "raid_level": "raid1", 00:33:07.499 "superblock": true, 00:33:07.499 "num_base_bdevs": 2, 00:33:07.499 "num_base_bdevs_discovered": 1, 00:33:07.499 "num_base_bdevs_operational": 1, 00:33:07.499 "base_bdevs_list": [ 00:33:07.499 { 00:33:07.499 "name": null, 00:33:07.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:07.499 "is_configured": false, 00:33:07.499 "data_offset": 256, 00:33:07.499 "data_size": 7936 00:33:07.499 }, 00:33:07.499 { 00:33:07.499 "name": "pt2", 00:33:07.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:07.499 "is_configured": true, 00:33:07.499 "data_offset": 256, 00:33:07.499 "data_size": 7936 00:33:07.499 } 00:33:07.499 ] 00:33:07.499 }' 00:33:07.499 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:07.499 13:53:10 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:08.067 [2024-11-20 13:53:10.819696] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:08.067 [2024-11-20 13:53:10.819741] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:08.067 [2024-11-20 13:53:10.819834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:08.067 [2024-11-20 13:53:10.819932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:08.067 [2024-11-20 13:53:10.819969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:33:08.067 13:53:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:08.067 [2024-11-20 13:53:10.883734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:08.067 [2024-11-20 13:53:10.883802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:08.067 [2024-11-20 13:53:10.883834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:33:08.067 [2024-11-20 13:53:10.883851] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:08.067 [2024-11-20 13:53:10.886326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:08.067 [2024-11-20 13:53:10.886369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:08.067 [2024-11-20 13:53:10.886450] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:08.067 [2024-11-20 13:53:10.886507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:08.067 [2024-11-20 13:53:10.886633] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:08.067 [2024-11-20 13:53:10.886650] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:08.067 [2024-11-20 13:53:10.886672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:33:08.067 [2024-11-20 13:53:10.886736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:08.067 [2024-11-20 13:53:10.886849] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:33:08.067 [2024-11-20 13:53:10.886865] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:08.067 [2024-11-20 13:53:10.887013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:08.067 [2024-11-20 13:53:10.887094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:33:08.067 [2024-11-20 13:53:10.887115] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:33:08.067 [2024-11-20 13:53:10.887211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:08.067 pt1 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:08.067 13:53:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:08.067 "name": "raid_bdev1", 00:33:08.067 "uuid": "92cc2ec9-7dae-469c-b2f6-56f21bc88ae2", 00:33:08.067 "strip_size_kb": 0, 00:33:08.067 "state": "online", 00:33:08.067 "raid_level": "raid1", 00:33:08.067 "superblock": true, 00:33:08.067 "num_base_bdevs": 2, 00:33:08.067 "num_base_bdevs_discovered": 1, 00:33:08.067 "num_base_bdevs_operational": 1, 00:33:08.067 "base_bdevs_list": [ 00:33:08.067 { 00:33:08.067 "name": null, 00:33:08.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.067 "is_configured": false, 00:33:08.067 "data_offset": 256, 00:33:08.067 "data_size": 7936 00:33:08.067 }, 00:33:08.067 { 00:33:08.067 "name": "pt2", 00:33:08.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:08.067 "is_configured": true, 00:33:08.067 "data_offset": 256, 00:33:08.067 
"data_size": 7936 00:33:08.067 } 00:33:08.067 ] 00:33:08.067 }' 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:08.067 13:53:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:08.635 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:33:08.635 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.635 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:08.635 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:08.635 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.635 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:33:08.635 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:08.635 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.635 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:33:08.635 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:08.635 [2024-11-20 13:53:11.496142] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:08.635 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.635 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 92cc2ec9-7dae-469c-b2f6-56f21bc88ae2 '!=' 92cc2ec9-7dae-469c-b2f6-56f21bc88ae2 ']' 00:33:08.635 13:53:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89417 00:33:08.635 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89417 ']' 00:33:08.635 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89417 00:33:08.894 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:33:08.894 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:08.894 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89417 00:33:08.894 killing process with pid 89417 00:33:08.894 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:08.894 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:08.894 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89417' 00:33:08.894 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89417 00:33:08.894 [2024-11-20 13:53:11.576313] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:08.894 13:53:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89417 00:33:08.894 [2024-11-20 13:53:11.576418] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:08.894 [2024-11-20 13:53:11.576485] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:08.894 [2024-11-20 13:53:11.576509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:33:08.894 [2024-11-20 13:53:11.739594] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:09.830 ************************************ 00:33:09.830 END TEST raid_superblock_test_md_interleaved 00:33:09.830 ************************************ 00:33:09.830 13:53:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:33:09.830 00:33:09.830 real 0m6.734s 00:33:09.830 user 0m10.726s 00:33:09.830 sys 0m1.046s 00:33:09.830 13:53:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:09.830 13:53:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:09.830 13:53:12 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:33:09.830 13:53:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:33:09.830 13:53:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:09.830 13:53:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:09.830 ************************************ 00:33:09.830 START TEST raid_rebuild_test_sb_md_interleaved 00:33:09.830 ************************************ 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:33:09.830 13:53:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:33:09.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89741 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89741 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89741 ']' 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:09.830 13:53:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:10.089 [2024-11-20 13:53:12.836505] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:33:10.089 [2024-11-20 13:53:12.836991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89741 ] 00:33:10.089 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:10.089 Zero copy mechanism will not be used. 00:33:10.348 [2024-11-20 13:53:13.010996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.348 [2024-11-20 13:53:13.124255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:10.625 [2024-11-20 13:53:13.309866] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:10.625 [2024-11-20 13:53:13.310154] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:10.883 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:10.883 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:33:10.883 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:10.883 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:33:10.883 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.883 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:11.142 BaseBdev1_malloc 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:11.143 13:53:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:11.143 [2024-11-20 13:53:13.818741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:11.143 [2024-11-20 13:53:13.818829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:11.143 [2024-11-20 13:53:13.818861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:11.143 [2024-11-20 13:53:13.818882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:11.143 [2024-11-20 13:53:13.821600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:11.143 [2024-11-20 13:53:13.821653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:11.143 BaseBdev1 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:11.143 BaseBdev2_malloc 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:11.143 [2024-11-20 13:53:13.869030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:11.143 [2024-11-20 13:53:13.869505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:11.143 [2024-11-20 13:53:13.869547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:11.143 [2024-11-20 13:53:13.869579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:11.143 [2024-11-20 13:53:13.871851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:11.143 [2024-11-20 13:53:13.871920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:11.143 BaseBdev2 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:11.143 spare_malloc 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:33:11.143 spare_delay 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:11.143 [2024-11-20 13:53:13.939384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:11.143 [2024-11-20 13:53:13.939473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:11.143 [2024-11-20 13:53:13.939504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:11.143 [2024-11-20 13:53:13.939523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:11.143 [2024-11-20 13:53:13.941802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:11.143 [2024-11-20 13:53:13.941854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:11.143 spare 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:11.143 [2024-11-20 13:53:13.947423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:11.143 [2024-11-20 13:53:13.949686] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:11.143 [2024-11-20 13:53:13.950078] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:11.143 [2024-11-20 13:53:13.950246] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:11.143 [2024-11-20 13:53:13.950460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:11.143 [2024-11-20 13:53:13.950688] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:11.143 [2024-11-20 13:53:13.950811] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:11.143 [2024-11-20 13:53:13.951101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:11.143 13:53:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.143 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:11.143 "name": "raid_bdev1", 00:33:11.143 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:11.143 "strip_size_kb": 0, 00:33:11.143 "state": "online", 00:33:11.143 "raid_level": "raid1", 00:33:11.143 "superblock": true, 00:33:11.143 "num_base_bdevs": 2, 00:33:11.143 "num_base_bdevs_discovered": 2, 00:33:11.143 "num_base_bdevs_operational": 2, 00:33:11.143 "base_bdevs_list": [ 00:33:11.143 { 00:33:11.143 "name": "BaseBdev1", 00:33:11.143 "uuid": "1dd019b2-2b2e-55cc-87da-6526262a7545", 00:33:11.143 "is_configured": true, 00:33:11.143 "data_offset": 256, 00:33:11.143 "data_size": 7936 00:33:11.143 }, 00:33:11.143 { 00:33:11.143 "name": "BaseBdev2", 00:33:11.143 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:11.143 "is_configured": true, 00:33:11.143 "data_offset": 256, 00:33:11.143 "data_size": 7936 00:33:11.143 } 00:33:11.143 ] 00:33:11.143 }' 00:33:11.143 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:11.143 13:53:14 
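The `verify_raid_bdev_state` trace above follows a simple pattern: dump all raid bdevs over RPC, then select the one under test by name with jq and inspect its fields. A minimal sketch of that selection step, using a trimmed stand-in for the `bdev_raid_get_bdevs` JSON shown in the log (the real output carries more fields):

```shell
#!/usr/bin/env bash
# Select the bdev named "raid_bdev1" from a bdev_raid_get_bdevs-style array,
# then pull individual fields out of it — the same jq pattern the test uses.
raid_bdev_info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<'EOF'
[
  {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2
  }
]
EOF
)
# Extract a single field from the selected object.
state=$(jq -r '.state' <<<"$raid_bdev_info")
echo "$state"
```

This assumes `jq` is available, as it is on the test host; the `// "none"` default seen later in the log (`jq -r '.process.type // "none"'`) is how the script tolerates the `process` object being absent once a rebuild has finished.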
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:11.710 [2024-11-20 13:53:14.475931] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:11.710 [2024-11-20 13:53:14.579538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:11.710 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.969 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:11.969 "name": "raid_bdev1", 00:33:11.969 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:11.969 "strip_size_kb": 0, 00:33:11.969 "state": "online", 00:33:11.969 "raid_level": "raid1", 00:33:11.969 "superblock": true, 00:33:11.969 "num_base_bdevs": 2, 00:33:11.969 "num_base_bdevs_discovered": 1, 00:33:11.969 "num_base_bdevs_operational": 1, 00:33:11.969 "base_bdevs_list": [ 00:33:11.969 { 00:33:11.969 "name": null, 00:33:11.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:11.969 "is_configured": false, 00:33:11.969 "data_offset": 0, 00:33:11.969 "data_size": 7936 00:33:11.969 }, 00:33:11.969 { 00:33:11.969 "name": "BaseBdev2", 00:33:11.969 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:11.969 "is_configured": true, 00:33:11.969 "data_offset": 256, 00:33:11.969 "data_size": 7936 00:33:11.969 } 00:33:11.969 ] 00:33:11.969 }' 00:33:11.969 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:11.969 13:53:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:12.228 13:53:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:12.228 13:53:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.228 13:53:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # 
set +x 00:33:12.228 [2024-11-20 13:53:15.099747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:12.228 [2024-11-20 13:53:15.115761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:12.228 13:53:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.228 13:53:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:33:12.228 [2024-11-20 13:53:15.118155] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:33:13.607 "name": "raid_bdev1", 00:33:13.607 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:13.607 "strip_size_kb": 0, 00:33:13.607 "state": "online", 00:33:13.607 "raid_level": "raid1", 00:33:13.607 "superblock": true, 00:33:13.607 "num_base_bdevs": 2, 00:33:13.607 "num_base_bdevs_discovered": 2, 00:33:13.607 "num_base_bdevs_operational": 2, 00:33:13.607 "process": { 00:33:13.607 "type": "rebuild", 00:33:13.607 "target": "spare", 00:33:13.607 "progress": { 00:33:13.607 "blocks": 2560, 00:33:13.607 "percent": 32 00:33:13.607 } 00:33:13.607 }, 00:33:13.607 "base_bdevs_list": [ 00:33:13.607 { 00:33:13.607 "name": "spare", 00:33:13.607 "uuid": "be3b32bc-b0a1-503d-81e0-cd8431fce1bd", 00:33:13.607 "is_configured": true, 00:33:13.607 "data_offset": 256, 00:33:13.607 "data_size": 7936 00:33:13.607 }, 00:33:13.607 { 00:33:13.607 "name": "BaseBdev2", 00:33:13.607 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:13.607 "is_configured": true, 00:33:13.607 "data_offset": 256, 00:33:13.607 "data_size": 7936 00:33:13.607 } 00:33:13.607 ] 00:33:13.607 }' 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:13.607 [2024-11-20 
13:53:16.287536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:13.607 [2024-11-20 13:53:16.327487] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:13.607 [2024-11-20 13:53:16.327617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:13.607 [2024-11-20 13:53:16.327644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:13.607 [2024-11-20 13:53:16.327683] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:13.607 13:53:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:13.607 "name": "raid_bdev1", 00:33:13.607 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:13.607 "strip_size_kb": 0, 00:33:13.607 "state": "online", 00:33:13.607 "raid_level": "raid1", 00:33:13.607 "superblock": true, 00:33:13.607 "num_base_bdevs": 2, 00:33:13.607 "num_base_bdevs_discovered": 1, 00:33:13.607 "num_base_bdevs_operational": 1, 00:33:13.607 "base_bdevs_list": [ 00:33:13.607 { 00:33:13.607 "name": null, 00:33:13.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.607 "is_configured": false, 00:33:13.607 "data_offset": 0, 00:33:13.607 "data_size": 7936 00:33:13.607 }, 00:33:13.607 { 00:33:13.607 "name": "BaseBdev2", 00:33:13.607 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:13.607 "is_configured": true, 00:33:13.607 "data_offset": 256, 00:33:13.607 "data_size": 7936 00:33:13.607 } 00:33:13.607 ] 00:33:13.607 }' 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:13.607 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:14.175 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:14.175 13:53:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:14.175 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:14.175 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:14.175 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:14.175 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:14.175 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.175 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:14.175 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:14.175 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.175 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:14.175 "name": "raid_bdev1", 00:33:14.175 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:14.175 "strip_size_kb": 0, 00:33:14.175 "state": "online", 00:33:14.175 "raid_level": "raid1", 00:33:14.175 "superblock": true, 00:33:14.175 "num_base_bdevs": 2, 00:33:14.175 "num_base_bdevs_discovered": 1, 00:33:14.175 "num_base_bdevs_operational": 1, 00:33:14.175 "base_bdevs_list": [ 00:33:14.175 { 00:33:14.175 "name": null, 00:33:14.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:14.175 "is_configured": false, 00:33:14.175 "data_offset": 0, 00:33:14.175 "data_size": 7936 00:33:14.175 }, 00:33:14.175 { 00:33:14.175 "name": "BaseBdev2", 00:33:14.175 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:14.175 "is_configured": true, 00:33:14.175 "data_offset": 256, 
00:33:14.175 "data_size": 7936 00:33:14.175 } 00:33:14.175 ] 00:33:14.175 }' 00:33:14.175 13:53:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:14.175 13:53:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:14.175 13:53:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:14.175 13:53:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:14.175 13:53:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:14.175 13:53:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.175 13:53:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:14.176 [2024-11-20 13:53:17.066312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:14.176 [2024-11-20 13:53:17.081085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:14.176 13:53:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.176 13:53:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:33:14.176 [2024-11-20 13:53:17.083581] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:15.628 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:15.628 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:15.628 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:15.628 13:53:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:15.628 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:15.628 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:15.628 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.628 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:15.628 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:15.628 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.628 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:15.628 "name": "raid_bdev1", 00:33:15.628 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:15.628 "strip_size_kb": 0, 00:33:15.628 "state": "online", 00:33:15.628 "raid_level": "raid1", 00:33:15.628 "superblock": true, 00:33:15.628 "num_base_bdevs": 2, 00:33:15.628 "num_base_bdevs_discovered": 2, 00:33:15.628 "num_base_bdevs_operational": 2, 00:33:15.628 "process": { 00:33:15.628 "type": "rebuild", 00:33:15.628 "target": "spare", 00:33:15.628 "progress": { 00:33:15.628 "blocks": 2560, 00:33:15.628 "percent": 32 00:33:15.628 } 00:33:15.628 }, 00:33:15.628 "base_bdevs_list": [ 00:33:15.628 { 00:33:15.628 "name": "spare", 00:33:15.628 "uuid": "be3b32bc-b0a1-503d-81e0-cd8431fce1bd", 00:33:15.628 "is_configured": true, 00:33:15.628 "data_offset": 256, 00:33:15.628 "data_size": 7936 00:33:15.628 }, 00:33:15.628 { 00:33:15.628 "name": "BaseBdev2", 00:33:15.628 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:15.628 "is_configured": true, 00:33:15.628 "data_offset": 256, 00:33:15.628 "data_size": 7936 00:33:15.628 } 
00:33:15.628 ] 00:33:15.628 }' 00:33:15.628 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:15.628 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:15.628 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:33:15.629 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=809 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local 
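The `[: =: unary operator expected` error at bdev_raid.sh line 666 in the trace above is the classic single-bracket pitfall: an empty, unquoted variable expands to nothing, so `[' $var = false ]` becomes `[ = false ]` and `test` sees `=` where it expects an operand. A minimal reproduction and the usual fix (quoting the expansion), independent of the SPDK script itself:

```shell
#!/usr/bin/env bash
# Reproduce the "[: =: unary operator expected" failure from the log, then
# show the well-formed quoted variant.
flag=""   # assume the variable was never assigned, as at bdev_raid.sh:666

# Buggy form: expands to `[ = false ]` -> test syntax error, exit status 2.
[ $flag = false ] 2>/dev/null
echo "unquoted exit status: $?"

# Fixed form: quoting preserves the empty word -> ordinary false, exit status 1.
[ "$flag" = false ]
echo "quoted exit status: $?"
```

Bash's `[[ ... ]]` keyword would also avoid the error, since it does not word-split unquoted expansions; the log shows the script already uses `[[ ]]` elsewhere for its pattern matches.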
target=spare 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:15.629 "name": "raid_bdev1", 00:33:15.629 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:15.629 "strip_size_kb": 0, 00:33:15.629 "state": "online", 00:33:15.629 "raid_level": "raid1", 00:33:15.629 "superblock": true, 00:33:15.629 "num_base_bdevs": 2, 00:33:15.629 "num_base_bdevs_discovered": 2, 00:33:15.629 "num_base_bdevs_operational": 2, 00:33:15.629 "process": { 00:33:15.629 "type": "rebuild", 00:33:15.629 "target": "spare", 00:33:15.629 "progress": { 00:33:15.629 "blocks": 2816, 00:33:15.629 "percent": 35 00:33:15.629 } 00:33:15.629 }, 00:33:15.629 "base_bdevs_list": [ 00:33:15.629 { 00:33:15.629 "name": "spare", 00:33:15.629 "uuid": "be3b32bc-b0a1-503d-81e0-cd8431fce1bd", 00:33:15.629 "is_configured": true, 00:33:15.629 "data_offset": 256, 00:33:15.629 "data_size": 7936 00:33:15.629 }, 00:33:15.629 { 00:33:15.629 "name": "BaseBdev2", 00:33:15.629 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:15.629 "is_configured": true, 00:33:15.629 "data_offset": 256, 00:33:15.629 "data_size": 7936 00:33:15.629 } 00:33:15.629 ] 00:33:15.629 }' 00:33:15.629 13:53:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:15.629 13:53:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:16.585 13:53:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:16.585 13:53:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:16.585 13:53:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:16.585 13:53:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:16.585 13:53:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:16.585 13:53:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:16.585 13:53:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.585 13:53:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:16.585 13:53:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.585 13:53:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:16.585 13:53:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.585 
13:53:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:16.585 "name": "raid_bdev1", 00:33:16.585 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:16.585 "strip_size_kb": 0, 00:33:16.585 "state": "online", 00:33:16.585 "raid_level": "raid1", 00:33:16.585 "superblock": true, 00:33:16.585 "num_base_bdevs": 2, 00:33:16.585 "num_base_bdevs_discovered": 2, 00:33:16.585 "num_base_bdevs_operational": 2, 00:33:16.585 "process": { 00:33:16.585 "type": "rebuild", 00:33:16.585 "target": "spare", 00:33:16.585 "progress": { 00:33:16.585 "blocks": 5888, 00:33:16.585 "percent": 74 00:33:16.585 } 00:33:16.585 }, 00:33:16.585 "base_bdevs_list": [ 00:33:16.585 { 00:33:16.585 "name": "spare", 00:33:16.585 "uuid": "be3b32bc-b0a1-503d-81e0-cd8431fce1bd", 00:33:16.585 "is_configured": true, 00:33:16.585 "data_offset": 256, 00:33:16.585 "data_size": 7936 00:33:16.585 }, 00:33:16.585 { 00:33:16.585 "name": "BaseBdev2", 00:33:16.585 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:16.585 "is_configured": true, 00:33:16.585 "data_offset": 256, 00:33:16.585 "data_size": 7936 00:33:16.585 } 00:33:16.585 ] 00:33:16.585 }' 00:33:16.585 13:53:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:16.845 13:53:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:16.845 13:53:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:16.845 13:53:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:16.845 13:53:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:17.412 [2024-11-20 13:53:20.205983] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:17.412 [2024-11-20 13:53:20.206073] 
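The repeated `(( SECONDS < timeout ))` checks in the trace are a time-bounded polling loop: the script re-verifies rebuild progress once per second until the rebuild completes or bash's `SECONDS` counter passes the deadline. A hedged sketch of that shape, where `rebuild_done` is a hypothetical stand-in for the RPC/jq verification the real script performs:

```shell
#!/usr/bin/env bash
# Time-bounded polling loop, as in bdev_raid.sh@707-711. SECONDS is bash's
# built-in count of seconds since shell start, so the deadline is absolute.
timeout=10

# Hypothetical completion check; pretend the rebuild finishes after ~1 second.
rebuild_done() { (( SECONDS >= 1 )); }

while (( SECONDS < timeout )); do
  if rebuild_done; then
    echo "rebuild finished"
    break
  fi
  sleep 1
done
```

The real test uses a much larger budget (`timeout=809` in the trace, i.e. the remaining share of a global deadline) and, on each iteration, re-runs `verify_raid_bdev_process` before deciding whether to `break`.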
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:17.412 [2024-11-20 13:53:20.206237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:17.671 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:17.671 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:17.671 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:17.671 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:17.671 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:17.671 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:17.930 "name": "raid_bdev1", 00:33:17.930 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:17.930 "strip_size_kb": 0, 00:33:17.930 "state": "online", 00:33:17.930 "raid_level": "raid1", 00:33:17.930 "superblock": true, 00:33:17.930 "num_base_bdevs": 2, 00:33:17.930 
"num_base_bdevs_discovered": 2, 00:33:17.930 "num_base_bdevs_operational": 2, 00:33:17.930 "base_bdevs_list": [ 00:33:17.930 { 00:33:17.930 "name": "spare", 00:33:17.930 "uuid": "be3b32bc-b0a1-503d-81e0-cd8431fce1bd", 00:33:17.930 "is_configured": true, 00:33:17.930 "data_offset": 256, 00:33:17.930 "data_size": 7936 00:33:17.930 }, 00:33:17.930 { 00:33:17.930 "name": "BaseBdev2", 00:33:17.930 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:17.930 "is_configured": true, 00:33:17.930 "data_offset": 256, 00:33:17.930 "data_size": 7936 00:33:17.930 } 00:33:17.930 ] 00:33:17.930 }' 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:17.930 13:53:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.930 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:17.930 "name": "raid_bdev1", 00:33:17.930 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:17.930 "strip_size_kb": 0, 00:33:17.930 "state": "online", 00:33:17.930 "raid_level": "raid1", 00:33:17.930 "superblock": true, 00:33:17.930 "num_base_bdevs": 2, 00:33:17.930 "num_base_bdevs_discovered": 2, 00:33:17.930 "num_base_bdevs_operational": 2, 00:33:17.930 "base_bdevs_list": [ 00:33:17.930 { 00:33:17.930 "name": "spare", 00:33:17.930 "uuid": "be3b32bc-b0a1-503d-81e0-cd8431fce1bd", 00:33:17.930 "is_configured": true, 00:33:17.930 "data_offset": 256, 00:33:17.930 "data_size": 7936 00:33:17.930 }, 00:33:17.930 { 00:33:17.930 "name": "BaseBdev2", 00:33:17.930 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:17.930 "is_configured": true, 00:33:17.931 "data_offset": 256, 00:33:17.931 "data_size": 7936 00:33:17.931 } 00:33:17.931 ] 00:33:17.931 }' 00:33:17.931 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:18.189 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:18.189 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:18.189 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:18.189 13:53:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:18.189 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:18.189 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:18.189 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:18.189 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:18.189 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:18.189 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:18.189 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:18.189 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:18.189 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:18.189 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:18.189 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:18.189 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.189 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:18.189 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.189 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:18.190 "name": 
"raid_bdev1", 00:33:18.190 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:18.190 "strip_size_kb": 0, 00:33:18.190 "state": "online", 00:33:18.190 "raid_level": "raid1", 00:33:18.190 "superblock": true, 00:33:18.190 "num_base_bdevs": 2, 00:33:18.190 "num_base_bdevs_discovered": 2, 00:33:18.190 "num_base_bdevs_operational": 2, 00:33:18.190 "base_bdevs_list": [ 00:33:18.190 { 00:33:18.190 "name": "spare", 00:33:18.190 "uuid": "be3b32bc-b0a1-503d-81e0-cd8431fce1bd", 00:33:18.190 "is_configured": true, 00:33:18.190 "data_offset": 256, 00:33:18.190 "data_size": 7936 00:33:18.190 }, 00:33:18.190 { 00:33:18.190 "name": "BaseBdev2", 00:33:18.190 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:18.190 "is_configured": true, 00:33:18.190 "data_offset": 256, 00:33:18.190 "data_size": 7936 00:33:18.190 } 00:33:18.190 ] 00:33:18.190 }' 00:33:18.190 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:18.190 13:53:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:18.757 [2024-11-20 13:53:21.428993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:18.757 [2024-11-20 13:53:21.429055] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:18.757 [2024-11-20 13:53:21.429213] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:18.757 [2024-11-20 13:53:21.429326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:18.757 [2024-11-20 
13:53:21.429347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.757 13:53:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:18.757 [2024-11-20 13:53:21.504965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:18.757 [2024-11-20 13:53:21.505053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:18.757 [2024-11-20 13:53:21.505093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:33:18.757 [2024-11-20 13:53:21.505111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:18.757 [2024-11-20 13:53:21.507766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:18.757 [2024-11-20 13:53:21.507816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:18.757 [2024-11-20 13:53:21.507924] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:18.757 [2024-11-20 13:53:21.508007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:18.757 [2024-11-20 13:53:21.508182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:18.757 spare 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:18.757 [2024-11-20 13:53:21.608372] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:33:18.757 [2024-11-20 13:53:21.608780] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:18.757 [2024-11-20 13:53:21.609044] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:33:18.757 [2024-11-20 13:53:21.609213] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:33:18.757 [2024-11-20 13:53:21.609235] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:33:18.757 [2024-11-20 13:53:21.609424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:18.757 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:18.758 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:18.758 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:18.758 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:18.758 13:53:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:18.758 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.758 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:18.758 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.758 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:18.758 "name": "raid_bdev1", 00:33:18.758 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:18.758 "strip_size_kb": 0, 00:33:18.758 "state": "online", 00:33:18.758 "raid_level": "raid1", 00:33:18.758 "superblock": true, 00:33:18.758 "num_base_bdevs": 2, 00:33:18.758 "num_base_bdevs_discovered": 2, 00:33:18.758 "num_base_bdevs_operational": 2, 00:33:18.758 "base_bdevs_list": [ 00:33:18.758 { 00:33:18.758 "name": "spare", 00:33:18.758 "uuid": "be3b32bc-b0a1-503d-81e0-cd8431fce1bd", 00:33:18.758 "is_configured": true, 00:33:18.758 "data_offset": 256, 00:33:18.758 "data_size": 7936 00:33:18.758 }, 00:33:18.758 { 00:33:18.758 "name": "BaseBdev2", 00:33:18.758 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:18.758 "is_configured": true, 00:33:18.758 "data_offset": 256, 00:33:18.758 "data_size": 7936 00:33:18.758 } 00:33:18.758 ] 00:33:18.758 }' 00:33:18.758 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:18.758 13:53:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:19.325 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:19.325 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:19.325 13:53:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:19.325 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:19.325 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:19.325 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.325 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.325 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:19.325 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:19.325 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.325 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:19.325 "name": "raid_bdev1", 00:33:19.325 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:19.325 "strip_size_kb": 0, 00:33:19.325 "state": "online", 00:33:19.325 "raid_level": "raid1", 00:33:19.325 "superblock": true, 00:33:19.325 "num_base_bdevs": 2, 00:33:19.325 "num_base_bdevs_discovered": 2, 00:33:19.325 "num_base_bdevs_operational": 2, 00:33:19.325 "base_bdevs_list": [ 00:33:19.325 { 00:33:19.325 "name": "spare", 00:33:19.325 "uuid": "be3b32bc-b0a1-503d-81e0-cd8431fce1bd", 00:33:19.325 "is_configured": true, 00:33:19.325 "data_offset": 256, 00:33:19.325 "data_size": 7936 00:33:19.325 }, 00:33:19.325 { 00:33:19.325 "name": "BaseBdev2", 00:33:19.326 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:19.326 "is_configured": true, 00:33:19.326 "data_offset": 256, 00:33:19.326 "data_size": 7936 00:33:19.326 } 00:33:19.326 ] 00:33:19.326 }' 00:33:19.326 13:53:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:19.326 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:19.326 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:19.584 [2024-11-20 13:53:22.345797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:19.584 13:53:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:19.584 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.585 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:19.585 "name": "raid_bdev1", 00:33:19.585 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:19.585 "strip_size_kb": 0, 00:33:19.585 "state": "online", 00:33:19.585 
"raid_level": "raid1", 00:33:19.585 "superblock": true, 00:33:19.585 "num_base_bdevs": 2, 00:33:19.585 "num_base_bdevs_discovered": 1, 00:33:19.585 "num_base_bdevs_operational": 1, 00:33:19.585 "base_bdevs_list": [ 00:33:19.585 { 00:33:19.585 "name": null, 00:33:19.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:19.585 "is_configured": false, 00:33:19.585 "data_offset": 0, 00:33:19.585 "data_size": 7936 00:33:19.585 }, 00:33:19.585 { 00:33:19.585 "name": "BaseBdev2", 00:33:19.585 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:19.585 "is_configured": true, 00:33:19.585 "data_offset": 256, 00:33:19.585 "data_size": 7936 00:33:19.585 } 00:33:19.585 ] 00:33:19.585 }' 00:33:19.585 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:19.585 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:20.153 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:20.153 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.153 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:20.153 [2024-11-20 13:53:22.905937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:20.153 [2024-11-20 13:53:22.906234] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:20.153 [2024-11-20 13:53:22.906266] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:33:20.153 [2024-11-20 13:53:22.906339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:20.153 [2024-11-20 13:53:22.923177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:33:20.153 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.153 13:53:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:33:20.153 [2024-11-20 13:53:22.925989] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:21.091 13:53:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:21.091 13:53:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:21.091 13:53:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:21.091 13:53:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:21.091 13:53:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:21.091 13:53:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:21.091 13:53:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.091 13:53:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:21.091 13:53:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:21.091 13:53:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.091 13:53:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:33:21.091 "name": "raid_bdev1", 00:33:21.091 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:21.091 "strip_size_kb": 0, 00:33:21.091 "state": "online", 00:33:21.091 "raid_level": "raid1", 00:33:21.091 "superblock": true, 00:33:21.091 "num_base_bdevs": 2, 00:33:21.091 "num_base_bdevs_discovered": 2, 00:33:21.091 "num_base_bdevs_operational": 2, 00:33:21.091 "process": { 00:33:21.091 "type": "rebuild", 00:33:21.091 "target": "spare", 00:33:21.091 "progress": { 00:33:21.091 "blocks": 2560, 00:33:21.091 "percent": 32 00:33:21.091 } 00:33:21.091 }, 00:33:21.091 "base_bdevs_list": [ 00:33:21.091 { 00:33:21.091 "name": "spare", 00:33:21.091 "uuid": "be3b32bc-b0a1-503d-81e0-cd8431fce1bd", 00:33:21.091 "is_configured": true, 00:33:21.091 "data_offset": 256, 00:33:21.091 "data_size": 7936 00:33:21.091 }, 00:33:21.091 { 00:33:21.091 "name": "BaseBdev2", 00:33:21.091 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:21.091 "is_configured": true, 00:33:21.091 "data_offset": 256, 00:33:21.091 "data_size": 7936 00:33:21.091 } 00:33:21.091 ] 00:33:21.091 }' 00:33:21.091 13:53:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:21.350 [2024-11-20 13:53:24.108010] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:21.350 [2024-11-20 13:53:24.136668] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:21.350 [2024-11-20 13:53:24.136799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:21.350 [2024-11-20 13:53:24.136825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:21.350 [2024-11-20 13:53:24.136842] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:21.350 13:53:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:21.350 "name": "raid_bdev1", 00:33:21.350 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:21.350 "strip_size_kb": 0, 00:33:21.350 "state": "online", 00:33:21.350 "raid_level": "raid1", 00:33:21.350 "superblock": true, 00:33:21.350 "num_base_bdevs": 2, 00:33:21.350 "num_base_bdevs_discovered": 1, 00:33:21.350 "num_base_bdevs_operational": 1, 00:33:21.350 "base_bdevs_list": [ 00:33:21.350 { 00:33:21.350 "name": null, 00:33:21.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:21.350 "is_configured": false, 00:33:21.350 "data_offset": 0, 00:33:21.350 "data_size": 7936 00:33:21.350 }, 00:33:21.350 { 00:33:21.350 "name": "BaseBdev2", 00:33:21.350 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:21.350 "is_configured": true, 00:33:21.350 "data_offset": 256, 00:33:21.350 "data_size": 7936 00:33:21.350 } 00:33:21.350 ] 00:33:21.350 }' 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:21.350 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:21.919 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:21.919 13:53:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.919 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:21.919 [2024-11-20 13:53:24.670533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:21.919 [2024-11-20 13:53:24.670656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:21.919 [2024-11-20 13:53:24.670699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:33:21.919 [2024-11-20 13:53:24.670721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:21.919 [2024-11-20 13:53:24.671047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:21.919 [2024-11-20 13:53:24.671081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:21.919 [2024-11-20 13:53:24.671197] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:21.919 [2024-11-20 13:53:24.671225] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:21.919 [2024-11-20 13:53:24.671250] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:33:21.919 [2024-11-20 13:53:24.671294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:21.919 [2024-11-20 13:53:24.687229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:21.919 spare 00:33:21.919 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.919 13:53:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:33:21.919 [2024-11-20 13:53:24.689781] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:22.855 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:22.855 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:22.855 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:22.855 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:22.855 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:22.855 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:22.855 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.855 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:22.855 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:22.855 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.855 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:33:22.855 "name": "raid_bdev1", 00:33:22.855 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:22.855 "strip_size_kb": 0, 00:33:22.855 "state": "online", 00:33:22.855 "raid_level": "raid1", 00:33:22.855 "superblock": true, 00:33:22.855 "num_base_bdevs": 2, 00:33:22.855 "num_base_bdevs_discovered": 2, 00:33:22.855 "num_base_bdevs_operational": 2, 00:33:22.855 "process": { 00:33:22.855 "type": "rebuild", 00:33:22.855 "target": "spare", 00:33:22.855 "progress": { 00:33:22.855 "blocks": 2560, 00:33:22.855 "percent": 32 00:33:22.855 } 00:33:22.855 }, 00:33:22.855 "base_bdevs_list": [ 00:33:22.855 { 00:33:22.855 "name": "spare", 00:33:22.855 "uuid": "be3b32bc-b0a1-503d-81e0-cd8431fce1bd", 00:33:22.855 "is_configured": true, 00:33:22.855 "data_offset": 256, 00:33:22.855 "data_size": 7936 00:33:22.855 }, 00:33:22.855 { 00:33:22.855 "name": "BaseBdev2", 00:33:22.855 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:22.855 "is_configured": true, 00:33:22.855 "data_offset": 256, 00:33:22.855 "data_size": 7936 00:33:22.855 } 00:33:22.855 ] 00:33:22.855 }' 00:33:22.856 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:23.114 [2024-11-20 
13:53:25.867351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:23.114 [2024-11-20 13:53:25.898760] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:23.114 [2024-11-20 13:53:25.899055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:23.114 [2024-11-20 13:53:25.899244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:23.114 [2024-11-20 13:53:25.899320] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:23.114 13:53:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:23.114 "name": "raid_bdev1", 00:33:23.114 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:23.114 "strip_size_kb": 0, 00:33:23.114 "state": "online", 00:33:23.114 "raid_level": "raid1", 00:33:23.114 "superblock": true, 00:33:23.114 "num_base_bdevs": 2, 00:33:23.114 "num_base_bdevs_discovered": 1, 00:33:23.114 "num_base_bdevs_operational": 1, 00:33:23.114 "base_bdevs_list": [ 00:33:23.114 { 00:33:23.114 "name": null, 00:33:23.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:23.114 "is_configured": false, 00:33:23.114 "data_offset": 0, 00:33:23.114 "data_size": 7936 00:33:23.114 }, 00:33:23.114 { 00:33:23.114 "name": "BaseBdev2", 00:33:23.114 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:23.114 "is_configured": true, 00:33:23.114 "data_offset": 256, 00:33:23.114 "data_size": 7936 00:33:23.114 } 00:33:23.114 ] 00:33:23.114 }' 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:23.114 13:53:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:23.688 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:23.688 13:53:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:23.688 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:23.688 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:23.688 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:23.688 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:23.688 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.688 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.688 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:23.688 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.688 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:23.688 "name": "raid_bdev1", 00:33:23.688 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:23.688 "strip_size_kb": 0, 00:33:23.688 "state": "online", 00:33:23.688 "raid_level": "raid1", 00:33:23.688 "superblock": true, 00:33:23.688 "num_base_bdevs": 2, 00:33:23.688 "num_base_bdevs_discovered": 1, 00:33:23.688 "num_base_bdevs_operational": 1, 00:33:23.688 "base_bdevs_list": [ 00:33:23.688 { 00:33:23.688 "name": null, 00:33:23.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:23.688 "is_configured": false, 00:33:23.688 "data_offset": 0, 00:33:23.688 "data_size": 7936 00:33:23.688 }, 00:33:23.688 { 00:33:23.688 "name": "BaseBdev2", 00:33:23.688 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:23.688 "is_configured": true, 00:33:23.688 "data_offset": 256, 
00:33:23.688 "data_size": 7936 00:33:23.688 } 00:33:23.688 ] 00:33:23.688 }' 00:33:23.688 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:23.688 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:23.688 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:23.946 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:23.946 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:33:23.946 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.946 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:23.946 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.946 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:23.946 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.947 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:23.947 [2024-11-20 13:53:26.656915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:23.947 [2024-11-20 13:53:26.657015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:23.947 [2024-11-20 13:53:26.657058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:33:23.947 [2024-11-20 13:53:26.657075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:23.947 [2024-11-20 13:53:26.657350] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:23.947 [2024-11-20 13:53:26.657375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:23.947 [2024-11-20 13:53:26.657447] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:23.947 [2024-11-20 13:53:26.657469] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:23.947 [2024-11-20 13:53:26.657484] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:23.947 [2024-11-20 13:53:26.657499] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:33:23.947 BaseBdev1 00:33:23.947 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.947 13:53:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:33:24.890 13:53:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:24.890 13:53:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:24.891 13:53:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:24.891 13:53:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:24.891 13:53:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:24.891 13:53:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:24.891 13:53:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:24.891 13:53:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:24.891 13:53:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:24.891 13:53:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:24.891 13:53:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:24.891 13:53:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:24.891 13:53:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.891 13:53:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:24.891 13:53:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.891 13:53:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:24.891 "name": "raid_bdev1", 00:33:24.891 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:24.891 "strip_size_kb": 0, 00:33:24.891 "state": "online", 00:33:24.891 "raid_level": "raid1", 00:33:24.891 "superblock": true, 00:33:24.891 "num_base_bdevs": 2, 00:33:24.891 "num_base_bdevs_discovered": 1, 00:33:24.891 "num_base_bdevs_operational": 1, 00:33:24.891 "base_bdevs_list": [ 00:33:24.891 { 00:33:24.891 "name": null, 00:33:24.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:24.891 "is_configured": false, 00:33:24.891 "data_offset": 0, 00:33:24.891 "data_size": 7936 00:33:24.891 }, 00:33:24.891 { 00:33:24.891 "name": "BaseBdev2", 00:33:24.891 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:24.891 "is_configured": true, 00:33:24.891 "data_offset": 256, 00:33:24.891 "data_size": 7936 00:33:24.891 } 00:33:24.891 ] 00:33:24.891 }' 00:33:24.891 13:53:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:24.891 13:53:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:25.459 "name": "raid_bdev1", 00:33:25.459 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:25.459 "strip_size_kb": 0, 00:33:25.459 "state": "online", 00:33:25.459 "raid_level": "raid1", 00:33:25.459 "superblock": true, 00:33:25.459 "num_base_bdevs": 2, 00:33:25.459 "num_base_bdevs_discovered": 1, 00:33:25.459 "num_base_bdevs_operational": 1, 00:33:25.459 "base_bdevs_list": [ 00:33:25.459 { 00:33:25.459 "name": 
null, 00:33:25.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:25.459 "is_configured": false, 00:33:25.459 "data_offset": 0, 00:33:25.459 "data_size": 7936 00:33:25.459 }, 00:33:25.459 { 00:33:25.459 "name": "BaseBdev2", 00:33:25.459 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:25.459 "is_configured": true, 00:33:25.459 "data_offset": 256, 00:33:25.459 "data_size": 7936 00:33:25.459 } 00:33:25.459 ] 00:33:25.459 }' 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.459 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:25.459 [2024-11-20 13:53:28.369553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:25.459 [2024-11-20 13:53:28.369800] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:25.459 [2024-11-20 13:53:28.369832] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:25.718 request: 00:33:25.719 { 00:33:25.719 "base_bdev": "BaseBdev1", 00:33:25.719 "raid_bdev": "raid_bdev1", 00:33:25.719 "method": "bdev_raid_add_base_bdev", 00:33:25.719 "req_id": 1 00:33:25.719 } 00:33:25.719 Got JSON-RPC error response 00:33:25.719 response: 00:33:25.719 { 00:33:25.719 "code": -22, 00:33:25.719 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:33:25.719 } 00:33:25.719 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:25.719 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:33:25.719 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:25.719 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:25.719 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:25.719 13:53:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:33:26.654 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:33:26.654 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:26.654 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:26.654 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:26.654 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:26.654 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:26.654 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:26.654 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:26.654 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:26.654 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:26.654 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:26.654 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:26.654 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.654 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:26.654 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.654 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:26.654 "name": "raid_bdev1", 00:33:26.654 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:26.654 "strip_size_kb": 0, 
00:33:26.654 "state": "online", 00:33:26.654 "raid_level": "raid1", 00:33:26.654 "superblock": true, 00:33:26.654 "num_base_bdevs": 2, 00:33:26.654 "num_base_bdevs_discovered": 1, 00:33:26.654 "num_base_bdevs_operational": 1, 00:33:26.654 "base_bdevs_list": [ 00:33:26.654 { 00:33:26.654 "name": null, 00:33:26.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:26.654 "is_configured": false, 00:33:26.654 "data_offset": 0, 00:33:26.654 "data_size": 7936 00:33:26.654 }, 00:33:26.654 { 00:33:26.654 "name": "BaseBdev2", 00:33:26.654 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:26.654 "is_configured": true, 00:33:26.654 "data_offset": 256, 00:33:26.654 "data_size": 7936 00:33:26.654 } 00:33:26.654 ] 00:33:26.654 }' 00:33:26.654 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:26.654 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:27.222 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:27.222 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:27.222 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:27.222 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:27.222 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:27.222 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:27.222 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.222 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:27.222 
13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:27.222 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.222 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:27.222 "name": "raid_bdev1", 00:33:27.222 "uuid": "9dde5cb5-4088-4160-82c5-cb6661c43855", 00:33:27.222 "strip_size_kb": 0, 00:33:27.222 "state": "online", 00:33:27.222 "raid_level": "raid1", 00:33:27.222 "superblock": true, 00:33:27.222 "num_base_bdevs": 2, 00:33:27.222 "num_base_bdevs_discovered": 1, 00:33:27.222 "num_base_bdevs_operational": 1, 00:33:27.222 "base_bdevs_list": [ 00:33:27.222 { 00:33:27.222 "name": null, 00:33:27.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:27.222 "is_configured": false, 00:33:27.222 "data_offset": 0, 00:33:27.222 "data_size": 7936 00:33:27.222 }, 00:33:27.222 { 00:33:27.222 "name": "BaseBdev2", 00:33:27.222 "uuid": "e5e2e90e-18fb-5a5b-994d-d6306b1e34cc", 00:33:27.222 "is_configured": true, 00:33:27.222 "data_offset": 256, 00:33:27.222 "data_size": 7936 00:33:27.222 } 00:33:27.222 ] 00:33:27.222 }' 00:33:27.222 13:53:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:27.222 13:53:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:27.222 13:53:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:27.222 13:53:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:27.222 13:53:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89741 00:33:27.222 13:53:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89741 ']' 00:33:27.222 13:53:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89741 00:33:27.222 13:53:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:33:27.222 13:53:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:27.222 13:53:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89741 00:33:27.222 killing process with pid 89741 00:33:27.222 Received shutdown signal, test time was about 60.000000 seconds 00:33:27.222 00:33:27.222 Latency(us) 00:33:27.222 [2024-11-20T13:53:30.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.222 [2024-11-20T13:53:30.139Z] =================================================================================================================== 00:33:27.222 [2024-11-20T13:53:30.139Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:27.222 13:53:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:27.222 13:53:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:27.222 13:53:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89741' 00:33:27.222 13:53:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89741 00:33:27.222 [2024-11-20 13:53:30.112593] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:27.222 13:53:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89741 00:33:27.222 [2024-11-20 13:53:30.112773] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:27.222 [2024-11-20 13:53:30.112841] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:33:27.222 [2024-11-20 13:53:30.112862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:33:27.481 [2024-11-20 13:53:30.337872] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:28.417 13:53:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:33:28.417 00:33:28.417 real 0m18.528s 00:33:28.417 user 0m25.354s 00:33:28.417 sys 0m1.531s 00:33:28.417 13:53:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:28.417 ************************************ 00:33:28.417 END TEST raid_rebuild_test_sb_md_interleaved 00:33:28.417 ************************************ 00:33:28.417 13:53:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:28.417 13:53:31 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:33:28.417 13:53:31 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:33:28.417 13:53:31 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89741 ']' 00:33:28.417 13:53:31 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89741 00:33:28.417 13:53:31 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:33:28.417 ************************************ 00:33:28.417 END TEST bdev_raid 00:33:28.417 ************************************ 00:33:28.417 00:33:28.417 real 13m12.185s 00:33:28.417 user 18m37.963s 00:33:28.417 sys 1m50.279s 00:33:28.417 13:53:31 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:28.417 13:53:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:28.676 13:53:31 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:33:28.676 13:53:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:28.676 13:53:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:28.676 13:53:31 -- common/autotest_common.sh@10 -- # set +x 00:33:28.676 
************************************ 00:33:28.676 START TEST spdkcli_raid 00:33:28.676 ************************************ 00:33:28.676 13:53:31 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:33:28.676 * Looking for test storage... 00:33:28.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:33:28.676 13:53:31 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:28.676 13:53:31 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:33:28.676 13:53:31 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:28.676 13:53:31 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:28.676 13:53:31 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:33:28.676 13:53:31 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:28.676 13:53:31 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:28.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.676 --rc genhtml_branch_coverage=1 00:33:28.676 --rc genhtml_function_coverage=1 00:33:28.676 --rc genhtml_legend=1 00:33:28.676 --rc geninfo_all_blocks=1 00:33:28.676 --rc geninfo_unexecuted_blocks=1 00:33:28.676 00:33:28.676 ' 00:33:28.676 13:53:31 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:28.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.676 --rc genhtml_branch_coverage=1 00:33:28.676 --rc genhtml_function_coverage=1 00:33:28.676 --rc genhtml_legend=1 00:33:28.676 --rc geninfo_all_blocks=1 00:33:28.676 --rc geninfo_unexecuted_blocks=1 00:33:28.676 00:33:28.676 ' 00:33:28.676 
13:53:31 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:28.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.676 --rc genhtml_branch_coverage=1 00:33:28.676 --rc genhtml_function_coverage=1 00:33:28.676 --rc genhtml_legend=1 00:33:28.676 --rc geninfo_all_blocks=1 00:33:28.676 --rc geninfo_unexecuted_blocks=1 00:33:28.676 00:33:28.676 ' 00:33:28.676 13:53:31 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:28.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.676 --rc genhtml_branch_coverage=1 00:33:28.676 --rc genhtml_function_coverage=1 00:33:28.676 --rc genhtml_legend=1 00:33:28.676 --rc geninfo_all_blocks=1 00:33:28.676 --rc geninfo_unexecuted_blocks=1 00:33:28.676 00:33:28.676 ' 00:33:28.676 13:53:31 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:33:28.676 13:53:31 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:33:28.676 13:53:31 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:33:28.676 13:53:31 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:33:28.676 13:53:31 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:33:28.676 13:53:31 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:33:28.676 13:53:31 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:33:28.676 13:53:31 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:33:28.676 13:53:31 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:33:28.676 13:53:31 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:33:28.676 13:53:31 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:33:28.676 13:53:31 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:33:28.676 13:53:31 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:33:28.676 13:53:31 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:33:28.676 13:53:31 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:33:28.676 13:53:31 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:33:28.676 13:53:31 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:33:28.676 13:53:31 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:33:28.676 13:53:31 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:33:28.676 13:53:31 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:33:28.676 13:53:31 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:33:28.676 13:53:31 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:33:28.676 13:53:31 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:33:28.676 13:53:31 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:33:28.676 13:53:31 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:33:28.676 13:53:31 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:33:28.676 13:53:31 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:33:28.676 13:53:31 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:33:28.676 13:53:31 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:33:28.676 13:53:31 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:33:28.677 13:53:31 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:33:28.677 13:53:31 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:33:28.677 13:53:31 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:33:28.677 13:53:31 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:28.677 13:53:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:28.934 13:53:31 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:33:28.934 13:53:31 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90424 00:33:28.934 13:53:31 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:33:28.934 13:53:31 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90424 00:33:28.934 13:53:31 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90424 ']' 00:33:28.934 13:53:31 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:28.934 13:53:31 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:28.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:28.935 13:53:31 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:28.935 13:53:31 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:28.935 13:53:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:28.935 [2024-11-20 13:53:31.741322] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:33:28.935 [2024-11-20 13:53:31.741794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90424 ] 00:33:29.192 [2024-11-20 13:53:31.932631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:29.192 [2024-11-20 13:53:32.072865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:29.192 [2024-11-20 13:53:32.072884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:30.124 13:53:32 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:30.124 13:53:32 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:33:30.124 13:53:33 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:33:30.124 13:53:33 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:30.124 13:53:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:30.382 13:53:33 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:33:30.382 13:53:33 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:30.382 13:53:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:30.382 13:53:33 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:30.382 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:30.382 ' 00:33:31.757 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:33:31.757 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:33:32.015 13:53:34 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:33:32.015 13:53:34 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:32.015 13:53:34 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:33:32.015 13:53:34 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:33:32.015 13:53:34 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:32.015 13:53:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:32.015 13:53:34 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:33:32.015 ' 00:33:33.391 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:33:33.391 13:53:36 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:33:33.391 13:53:36 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:33.391 13:53:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:33.391 13:53:36 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:33:33.391 13:53:36 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:33.391 13:53:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:33.391 13:53:36 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:33:33.391 13:53:36 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:33:33.957 13:53:36 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:33:33.957 13:53:36 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:33:33.957 13:53:36 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:33:33.957 13:53:36 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:33.957 13:53:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:33.957 13:53:36 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:33:33.957 13:53:36 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:33.957 13:53:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:33.957 13:53:36 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:33:33.957 ' 00:33:35.333 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:33:35.333 13:53:37 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:33:35.333 13:53:37 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:35.333 13:53:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:35.333 13:53:37 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:33:35.333 13:53:37 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:35.333 13:53:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:35.333 13:53:37 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:33:35.333 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:33:35.333 ' 00:33:36.707 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:33:36.707 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:33:36.707 13:53:39 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:33:36.707 13:53:39 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:36.707 13:53:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:36.707 13:53:39 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90424 00:33:36.707 13:53:39 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90424 ']' 00:33:36.707 13:53:39 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90424 00:33:36.707 13:53:39 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:33:36.707 13:53:39 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:36.707 13:53:39 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90424 00:33:36.965 killing process with pid 90424 00:33:36.965 13:53:39 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:36.965 13:53:39 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:36.965 13:53:39 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90424' 00:33:36.965 13:53:39 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90424 00:33:36.965 13:53:39 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90424 00:33:39.500 13:53:42 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:33:39.500 13:53:42 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90424 ']' 00:33:39.500 13:53:42 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90424 00:33:39.500 Process with pid 90424 is not found 00:33:39.500 13:53:42 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90424 ']' 00:33:39.500 13:53:42 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90424 00:33:39.500 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90424) - No such process 00:33:39.500 13:53:42 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90424 is not found' 00:33:39.500 13:53:42 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:33:39.500 13:53:42 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:39.500 13:53:42 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:39.500 13:53:42 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:39.500 ************************************ 00:33:39.500 END TEST spdkcli_raid 
00:33:39.500 ************************************ 00:33:39.500 00:33:39.500 real 0m10.634s 00:33:39.500 user 0m21.916s 00:33:39.500 sys 0m1.336s 00:33:39.500 13:53:42 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:39.500 13:53:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:33:39.500 13:53:42 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:33:39.500 13:53:42 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:39.500 13:53:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:39.500 13:53:42 -- common/autotest_common.sh@10 -- # set +x 00:33:39.500 ************************************ 00:33:39.500 START TEST blockdev_raid5f 00:33:39.500 ************************************ 00:33:39.500 13:53:42 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:33:39.500 * Looking for test storage... 00:33:39.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:33:39.500 13:53:42 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:39.500 13:53:42 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:33:39.500 13:53:42 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:39.500 13:53:42 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:39.500 13:53:42 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:33:39.500 13:53:42 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:39.500 13:53:42 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:39.500 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.500 --rc genhtml_branch_coverage=1 00:33:39.500 --rc genhtml_function_coverage=1 00:33:39.500 --rc genhtml_legend=1 00:33:39.500 --rc geninfo_all_blocks=1 00:33:39.500 --rc geninfo_unexecuted_blocks=1 00:33:39.500 00:33:39.500 ' 00:33:39.500 13:53:42 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:39.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.500 --rc genhtml_branch_coverage=1 00:33:39.500 --rc genhtml_function_coverage=1 00:33:39.500 --rc genhtml_legend=1 00:33:39.500 --rc geninfo_all_blocks=1 00:33:39.500 --rc geninfo_unexecuted_blocks=1 00:33:39.500 00:33:39.500 ' 00:33:39.500 13:53:42 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:39.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.500 --rc genhtml_branch_coverage=1 00:33:39.500 --rc genhtml_function_coverage=1 00:33:39.500 --rc genhtml_legend=1 00:33:39.500 --rc geninfo_all_blocks=1 00:33:39.500 --rc geninfo_unexecuted_blocks=1 00:33:39.500 00:33:39.500 ' 00:33:39.500 13:53:42 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:39.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.500 --rc genhtml_branch_coverage=1 00:33:39.500 --rc genhtml_function_coverage=1 00:33:39.500 --rc genhtml_legend=1 00:33:39.500 --rc geninfo_all_blocks=1 00:33:39.500 --rc geninfo_unexecuted_blocks=1 00:33:39.500 00:33:39.500 ' 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90704 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:33:39.501 13:53:42 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90704 00:33:39.501 13:53:42 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90704 ']' 00:33:39.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:39.501 13:53:42 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:39.501 13:53:42 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:39.501 13:53:42 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:39.501 13:53:42 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:39.501 13:53:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:39.763 [2024-11-20 13:53:42.428264] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:33:39.763 [2024-11-20 13:53:42.428459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90704 ] 00:33:39.763 [2024-11-20 13:53:42.620409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.024 [2024-11-20 13:53:42.775559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.960 13:53:43 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:40.960 13:53:43 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:33:40.960 13:53:43 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:33:40.960 13:53:43 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:33:40.960 13:53:43 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:33:40.960 13:53:43 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.960 13:53:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:40.960 Malloc0 00:33:40.960 Malloc1 00:33:40.960 Malloc2 00:33:40.960 13:53:43 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.960 13:53:43 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:33:40.960 13:53:43 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.960 13:53:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:40.960 13:53:43 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.960 13:53:43 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:33:40.960 13:53:43 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:33:40.961 13:53:43 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.961 13:53:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:40.961 13:53:43 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.961 13:53:43 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:33:40.961 13:53:43 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.961 13:53:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:41.219 13:53:43 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.219 13:53:43 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:33:41.219 13:53:43 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.219 13:53:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:41.219 13:53:43 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.219 13:53:43 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:33:41.219 13:53:43 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:33:41.219 13:53:43 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:33:41.219 13:53:43 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.219 13:53:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:41.219 13:53:43 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.219 13:53:43 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:33:41.219 13:53:43 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:33:41.220 13:53:43 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "f4815913-bbec-4f25-a3ca-e722138e4c88"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f4815913-bbec-4f25-a3ca-e722138e4c88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "f4815913-bbec-4f25-a3ca-e722138e4c88",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "7ea36fa0-c295-4c32-a00b-a735af03dfd8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"3cd75eb1-8865-49e4-bc83-2c43b94f93c7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "de61dc9a-b937-46c3-8bd8-b50909ca1bd8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:33:41.220 13:53:44 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:33:41.220 13:53:44 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:33:41.220 13:53:44 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:33:41.220 13:53:44 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90704 00:33:41.220 13:53:44 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90704 ']' 00:33:41.220 13:53:44 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90704 00:33:41.220 13:53:44 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:33:41.220 13:53:44 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:41.220 13:53:44 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90704 00:33:41.220 killing process with pid 90704 00:33:41.220 13:53:44 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:41.220 13:53:44 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:41.220 13:53:44 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90704' 00:33:41.220 13:53:44 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90704 00:33:41.220 13:53:44 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90704 00:33:44.508 13:53:46 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:44.508 13:53:46 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:33:44.508 13:53:46 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:33:44.508 13:53:46 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:44.508 13:53:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:44.508 ************************************ 00:33:44.508 START TEST bdev_hello_world 00:33:44.508 ************************************ 00:33:44.508 13:53:46 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:33:44.508 [2024-11-20 13:53:46.808960] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:33:44.508 [2024-11-20 13:53:46.809207] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90772 ] 00:33:44.508 [2024-11-20 13:53:46.987880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.508 [2024-11-20 13:53:47.125098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:45.075 [2024-11-20 13:53:47.697313] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:33:45.075 [2024-11-20 13:53:47.697370] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:33:45.075 [2024-11-20 13:53:47.697410] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:33:45.075 [2024-11-20 13:53:47.698095] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:33:45.075 [2024-11-20 13:53:47.698293] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:33:45.075 [2024-11-20 13:53:47.698328] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:33:45.075 [2024-11-20 13:53:47.698399] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:33:45.075 00:33:45.075 [2024-11-20 13:53:47.698428] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:33:46.464 00:33:46.464 real 0m2.411s 00:33:46.464 user 0m1.970s 00:33:46.464 sys 0m0.311s 00:33:46.464 13:53:49 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:46.464 ************************************ 00:33:46.464 END TEST bdev_hello_world 00:33:46.464 ************************************ 00:33:46.464 13:53:49 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:33:46.464 13:53:49 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:33:46.464 13:53:49 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:46.464 13:53:49 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:46.464 13:53:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:46.464 ************************************ 00:33:46.464 START TEST bdev_bounds 00:33:46.464 ************************************ 00:33:46.464 13:53:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:33:46.464 13:53:49 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90814 00:33:46.464 13:53:49 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:33:46.464 Process bdevio pid: 90814 00:33:46.464 13:53:49 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90814' 00:33:46.464 13:53:49 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90814 00:33:46.464 13:53:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90814 ']' 00:33:46.464 13:53:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.464 13:53:49 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # 
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:33:46.464 13:53:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:46.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:46.464 13:53:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.464 13:53:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:46.464 13:53:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:33:46.464 [2024-11-20 13:53:49.307194] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:33:46.464 [2024-11-20 13:53:49.307737] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90814 ] 00:33:46.724 [2024-11-20 13:53:49.506143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:46.982 [2024-11-20 13:53:49.669281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.982 [2024-11-20 13:53:49.669413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.982 [2024-11-20 13:53:49.669418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:47.550 13:53:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:47.550 13:53:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:33:47.550 13:53:50 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:33:47.550 I/O targets: 00:33:47.550 raid5f: 131072 blocks of 512 bytes (64 MiB) 
00:33:47.550 00:33:47.550 00:33:47.550 CUnit - A unit testing framework for C - Version 2.1-3 00:33:47.550 http://cunit.sourceforge.net/ 00:33:47.550 00:33:47.550 00:33:47.550 Suite: bdevio tests on: raid5f 00:33:47.550 Test: blockdev write read block ...passed 00:33:47.550 Test: blockdev write zeroes read block ...passed 00:33:47.550 Test: blockdev write zeroes read no split ...passed 00:33:47.809 Test: blockdev write zeroes read split ...passed 00:33:47.809 Test: blockdev write zeroes read split partial ...passed 00:33:47.809 Test: blockdev reset ...passed 00:33:47.809 Test: blockdev write read 8 blocks ...passed 00:33:47.809 Test: blockdev write read size > 128k ...passed 00:33:47.809 Test: blockdev write read invalid size ...passed 00:33:47.809 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:47.809 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:47.809 Test: blockdev write read max offset ...passed 00:33:47.809 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:47.809 Test: blockdev writev readv 8 blocks ...passed 00:33:47.809 Test: blockdev writev readv 30 x 1block ...passed 00:33:47.809 Test: blockdev writev readv block ...passed 00:33:47.809 Test: blockdev writev readv size > 128k ...passed 00:33:47.809 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:47.809 Test: blockdev comparev and writev ...passed 00:33:47.809 Test: blockdev nvme passthru rw ...passed 00:33:47.809 Test: blockdev nvme passthru vendor specific ...passed 00:33:47.809 Test: blockdev nvme admin passthru ...passed 00:33:47.809 Test: blockdev copy ...passed 00:33:47.809 00:33:47.809 Run Summary: Type Total Ran Passed Failed Inactive 00:33:47.809 suites 1 1 n/a 0 0 00:33:47.809 tests 23 23 23 0 0 00:33:47.809 asserts 130 130 130 0 n/a 00:33:47.809 00:33:47.809 Elapsed time = 0.646 seconds 00:33:47.809 0 00:33:47.809 13:53:50 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # 
killprocess 90814 00:33:47.809 13:53:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90814 ']' 00:33:47.809 13:53:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90814 00:33:47.809 13:53:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:33:48.068 13:53:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:48.068 13:53:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90814 00:33:48.068 13:53:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:48.068 13:53:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:48.068 13:53:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90814' 00:33:48.068 killing process with pid 90814 00:33:48.068 13:53:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90814 00:33:48.068 13:53:50 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90814 00:33:49.446 13:53:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:33:49.446 00:33:49.446 real 0m2.976s 00:33:49.446 user 0m7.242s 00:33:49.446 sys 0m0.479s 00:33:49.446 13:53:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:49.446 13:53:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:33:49.446 ************************************ 00:33:49.446 END TEST bdev_bounds 00:33:49.446 ************************************ 00:33:49.446 13:53:52 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:33:49.446 13:53:52 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:49.446 13:53:52 blockdev_raid5f -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:33:49.446 13:53:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:49.446 ************************************ 00:33:49.446 START TEST bdev_nbd 00:33:49.446 ************************************ 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:33:49.446 13:53:52 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90879 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90879 /var/tmp/spdk-nbd.sock 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90879 ']' 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:49.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:49.446 13:53:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:33:49.446 [2024-11-20 13:53:52.354615] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:33:49.446 [2024-11-20 13:53:52.354845] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:49.705 [2024-11-20 13:53:52.550189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.964 [2024-11-20 13:53:52.689899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.530 13:53:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:50.530 13:53:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:33:50.530 13:53:53 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:33:50.530 13:53:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:50.530 13:53:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:33:50.530 13:53:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:33:50.530 13:53:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:33:50.530 13:53:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:50.530 13:53:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:33:50.530 13:53:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:33:50.530 13:53:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:33:50.530 13:53:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:33:50.530 13:53:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:33:50.530 13:53:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:33:50.530 13:53:53 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:33:50.787 13:53:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:33:50.787 13:53:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:33:50.787 13:53:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:33:50.787 13:53:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:50.787 13:53:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:33:50.787 13:53:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:50.787 13:53:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:50.787 13:53:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:50.787 13:53:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:33:50.787 13:53:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:50.787 13:53:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:50.787 13:53:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:51.045 1+0 records in 00:33:51.045 1+0 records out 00:33:51.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629424 s, 6.5 MB/s 00:33:51.045 13:53:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:51.045 13:53:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:33:51.045 13:53:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:51.045 13:53:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:33:51.045 13:53:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:33:51.045 13:53:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:51.045 13:53:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:33:51.045 13:53:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:51.327 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:33:51.327 { 00:33:51.327 "nbd_device": "/dev/nbd0", 00:33:51.327 "bdev_name": "raid5f" 00:33:51.327 } 00:33:51.327 ]' 00:33:51.327 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:33:51.327 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:33:51.327 { 00:33:51.327 "nbd_device": "/dev/nbd0", 00:33:51.327 "bdev_name": "raid5f" 00:33:51.327 } 00:33:51.327 ]' 00:33:51.327 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:33:51.327 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:51.327 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:51.327 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:51.327 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:51.327 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:51.327 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:51.327 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:51.585 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:33:51.585 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:51.585 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:51.585 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:51.585 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:51.585 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:51.585 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:51.585 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:51.585 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:51.585 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:51.585 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:51.843 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:33:51.844 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:51.844 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:51.844 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:51.844 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:33:51.844 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:51.844 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:51.844 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:33:52.102 /dev/nbd0 00:33:52.102 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:52.102 13:53:54 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:52.102 13:53:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:52.102 13:53:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:33:52.102 13:53:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:52.102 13:53:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:52.102 13:53:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:52.102 13:53:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:33:52.102 13:53:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:52.102 13:53:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:52.102 13:53:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:52.102 1+0 records in 00:33:52.102 1+0 records out 00:33:52.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322925 s, 12.7 MB/s 00:33:52.102 13:53:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:52.102 13:53:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:33:52.102 13:53:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:52.102 13:53:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:52.102 13:53:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:33:52.102 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:52.102 13:53:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:52.102 13:53:55 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:52.102 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:52.102 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:33:52.671 { 00:33:52.671 "nbd_device": "/dev/nbd0", 00:33:52.671 "bdev_name": "raid5f" 00:33:52.671 } 00:33:52.671 ]' 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:52.671 { 00:33:52.671 "nbd_device": "/dev/nbd0", 00:33:52.671 "bdev_name": "raid5f" 00:33:52.671 } 00:33:52.671 ]' 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:33:52.671 256+0 records in 00:33:52.671 256+0 records out 00:33:52.671 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104057 s, 101 MB/s 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:52.671 256+0 records in 00:33:52.671 256+0 records out 00:33:52.671 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0425121 s, 24.7 MB/s 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:52.671 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:52.930 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:52.930 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:52.930 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:52.930 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:52.930 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:52.930 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:52.930 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:52.930 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:52.930 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:52.930 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:52.930 13:53:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:33:53.497 13:53:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:53.497 13:53:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:53.497 13:53:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:53.497 13:53:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:53.497 13:53:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:33:53.497 13:53:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:53.497 13:53:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:33:53.497 13:53:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:33:53.497 13:53:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:33:53.497 13:53:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:33:53.497 13:53:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:53.497 13:53:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:33:53.497 13:53:56 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:53.497 13:53:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:53.497 13:53:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:33:53.497 13:53:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:33:53.756 malloc_lvol_verify 00:33:53.756 13:53:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:33:54.015 88d2536b-c2d0-4aa9-85a9-eff1c9ed8642 00:33:54.015 13:53:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:33:54.273 f765d691-fedf-4519-8baf-daa686fe538b 00:33:54.273 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:33:54.532 /dev/nbd0 00:33:54.532 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:33:54.532 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:33:54.532 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:33:54.532 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:33:54.532 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:33:54.532 mke2fs 1.47.0 (5-Feb-2023) 00:33:54.532 Discarding device blocks: 0/4096 done 00:33:54.532 Creating filesystem with 4096 1k blocks and 1024 inodes 00:33:54.532 00:33:54.532 Allocating group tables: 0/1 done 00:33:54.532 Writing inode tables: 0/1 done 00:33:54.790 Creating journal (1024 blocks): done 00:33:54.790 Writing superblocks and filesystem accounting information: 0/1 done 00:33:54.790 00:33:54.790 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:54.791 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:54.791 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:54.791 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:54.791 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:54.791 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:54.791 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:55.049 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:55.049 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:55.049 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:55.049 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:55.049 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:55.049 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:55.049 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:55.049 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:55.049 13:53:57 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90879 00:33:55.049 13:53:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90879 ']' 00:33:55.049 13:53:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90879 00:33:55.049 13:53:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:33:55.049 13:53:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:55.049 13:53:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90879 00:33:55.049 killing process with pid 90879 00:33:55.049 13:53:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:55.049 13:53:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:55.049 13:53:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90879' 00:33:55.049 13:53:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90879 00:33:55.049 13:53:57 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90879 00:33:56.424 ************************************ 00:33:56.424 END TEST bdev_nbd 00:33:56.424 ************************************ 00:33:56.424 13:53:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:33:56.424 00:33:56.424 real 0m7.017s 00:33:56.424 user 0m10.127s 00:33:56.424 sys 0m1.524s 00:33:56.424 13:53:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:56.424 13:53:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:33:56.424 13:53:59 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:33:56.424 13:53:59 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:33:56.424 13:53:59 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:33:56.424 13:53:59 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:33:56.424 13:53:59 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:56.424 13:53:59 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:56.424 13:53:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:33:56.424 ************************************ 00:33:56.424 START TEST bdev_fio 00:33:56.424 ************************************ 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:33:56.424 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:33:56.424 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:33:56.682 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:33:56.682 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:56.683 ************************************ 00:33:56.683 START TEST bdev_fio_rw_verify 00:33:56.683 ************************************ 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:56.683 13:53:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:56.941 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:56.941 fio-3.35 00:33:56.941 Starting 1 thread 00:34:09.176 00:34:09.176 job_raid5f: (groupid=0, jobs=1): err= 0: pid=91092: Wed Nov 20 13:54:10 2024 00:34:09.176 read: IOPS=8284, BW=32.4MiB/s (33.9MB/s)(324MiB/10000msec) 00:34:09.176 slat (usec): min=21, max=588, avg=30.17, stdev= 8.42 00:34:09.176 clat (usec): min=12, max=1288, avg=191.89, stdev=78.35 00:34:09.176 lat (usec): min=39, max=1317, avg=222.06, stdev=80.66 00:34:09.176 clat percentiles (usec): 00:34:09.176 | 50.000th=[ 190], 99.000th=[ 379], 99.900th=[ 529], 99.990th=[ 734], 00:34:09.176 | 99.999th=[ 1287] 00:34:09.176 write: IOPS=8717, BW=34.1MiB/s (35.7MB/s)(336MiB/9871msec); 0 zone resets 00:34:09.176 slat (usec): min=10, max=2186, avg=23.62, stdev=10.79 00:34:09.176 clat (usec): min=83, max=3314, avg=443.19, stdev=84.65 00:34:09.176 lat (usec): min=114, max=3335, avg=466.82, stdev=88.19 00:34:09.176 clat percentiles (usec): 00:34:09.176 | 50.000th=[ 441], 99.000th=[ 668], 99.900th=[ 1037], 99.990th=[ 1221], 00:34:09.176 | 99.999th=[ 3326] 00:34:09.176 bw ( KiB/s): min=31240, max=41904, per=98.13%, avg=34218.53, stdev=2875.41, samples=19 00:34:09.176 iops : min= 7810, max=10476, avg=8554.63, stdev=718.85, samples=19 00:34:09.176 lat (usec) : 20=0.01%, 100=6.70%, 
250=30.43%, 500=52.35%, 750=10.18% 00:34:09.176 lat (usec) : 1000=0.28% 00:34:09.176 lat (msec) : 2=0.06%, 4=0.01% 00:34:09.176 cpu : usr=98.51%, sys=0.61%, ctx=36, majf=0, minf=7244 00:34:09.176 IO depths : 1=7.8%, 2=20.0%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:09.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.176 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.176 issued rwts: total=82840,86055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:09.176 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:09.176 00:34:09.176 Run status group 0 (all jobs): 00:34:09.176 READ: bw=32.4MiB/s (33.9MB/s), 32.4MiB/s-32.4MiB/s (33.9MB/s-33.9MB/s), io=324MiB (339MB), run=10000-10000msec 00:34:09.176 WRITE: bw=34.1MiB/s (35.7MB/s), 34.1MiB/s-34.1MiB/s (35.7MB/s-35.7MB/s), io=336MiB (352MB), run=9871-9871msec 00:34:09.176 ----------------------------------------------------- 00:34:09.176 Suppressions used: 00:34:09.176 count bytes template 00:34:09.176 1 7 /usr/src/fio/parse.c 00:34:09.176 635 60960 /usr/src/fio/iolog.c 00:34:09.176 1 8 libtcmalloc_minimal.so 00:34:09.176 1 904 libcrypto.so 00:34:09.176 ----------------------------------------------------- 00:34:09.176 00:34:09.176 00:34:09.176 real 0m12.662s 00:34:09.176 user 0m12.869s 00:34:09.176 sys 0m0.816s 00:34:09.176 13:54:12 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:09.176 13:54:12 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:34:09.176 ************************************ 00:34:09.176 END TEST bdev_fio_rw_verify 00:34:09.176 ************************************ 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "f4815913-bbec-4f25-a3ca-e722138e4c88"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f4815913-bbec-4f25-a3ca-e722138e4c88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "f4815913-bbec-4f25-a3ca-e722138e4c88",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "7ea36fa0-c295-4c32-a00b-a735af03dfd8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "3cd75eb1-8865-49e4-bc83-2c43b94f93c7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "de61dc9a-b937-46c3-8bd8-b50909ca1bd8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:34:09.435 /home/vagrant/spdk_repo/spdk 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:34:09.435 00:34:09.435 real 0m12.896s 00:34:09.435 user 0m12.987s 00:34:09.435 sys 0m0.913s 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:09.435 13:54:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:34:09.435 ************************************ 00:34:09.435 END TEST bdev_fio 00:34:09.435 ************************************ 00:34:09.435 13:54:12 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:09.435 13:54:12 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:09.435 13:54:12 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:34:09.435 13:54:12 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:09.435 13:54:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:09.435 ************************************ 00:34:09.435 START TEST bdev_verify 00:34:09.435 ************************************ 00:34:09.435 13:54:12 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:09.694 [2024-11-20 13:54:12.360069] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 
00:34:09.694 [2024-11-20 13:54:12.360242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91252 ] 00:34:09.694 [2024-11-20 13:54:12.548069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:09.953 [2024-11-20 13:54:12.682852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:09.953 [2024-11-20 13:54:12.683432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:10.520 Running I/O for 5 seconds... 00:34:12.392 8915.00 IOPS, 34.82 MiB/s [2024-11-20T13:54:16.686Z] 8755.50 IOPS, 34.20 MiB/s [2024-11-20T13:54:17.622Z] 8584.00 IOPS, 33.53 MiB/s [2024-11-20T13:54:18.558Z] 8556.75 IOPS, 33.42 MiB/s [2024-11-20T13:54:18.558Z] 8564.60 IOPS, 33.46 MiB/s 00:34:15.641 Latency(us) 00:34:15.641 [2024-11-20T13:54:18.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:15.641 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:15.641 Verification LBA range: start 0x0 length 0x2000 00:34:15.641 raid5f : 5.02 4301.94 16.80 0.00 0.00 44871.30 595.78 39559.91 00:34:15.641 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:34:15.641 Verification LBA range: start 0x2000 length 0x2000 00:34:15.641 raid5f : 5.03 4270.77 16.68 0.00 0.00 45365.99 192.70 39559.91 00:34:15.641 [2024-11-20T13:54:18.558Z] =================================================================================================================== 00:34:15.641 [2024-11-20T13:54:18.558Z] Total : 8572.71 33.49 0.00 0.00 45117.95 192.70 39559.91 00:34:17.018 00:34:17.018 real 0m7.519s 00:34:17.018 user 0m13.724s 00:34:17.018 sys 0m0.363s 00:34:17.018 13:54:19 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:17.018 
************************************ 00:34:17.018 END TEST bdev_verify 00:34:17.018 13:54:19 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:34:17.018 ************************************ 00:34:17.018 13:54:19 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:17.018 13:54:19 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:34:17.018 13:54:19 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:17.018 13:54:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:17.018 ************************************ 00:34:17.018 START TEST bdev_verify_big_io 00:34:17.018 ************************************ 00:34:17.018 13:54:19 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:17.277 [2024-11-20 13:54:19.939967] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:34:17.277 [2024-11-20 13:54:19.940157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91348 ] 00:34:17.277 [2024-11-20 13:54:20.133622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:17.536 [2024-11-20 13:54:20.286707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.536 [2024-11-20 13:54:20.286739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:18.105 Running I/O for 5 seconds... 
00:34:20.049 442.00 IOPS, 27.62 MiB/s [2024-11-20T13:54:24.342Z] 507.00 IOPS, 31.69 MiB/s [2024-11-20T13:54:25.278Z] 549.00 IOPS, 34.31 MiB/s [2024-11-20T13:54:26.214Z] 554.75 IOPS, 34.67 MiB/s [2024-11-20T13:54:26.473Z] 583.40 IOPS, 36.46 MiB/s 00:34:23.556 Latency(us) 00:34:23.556 [2024-11-20T13:54:26.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:23.556 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:34:23.556 Verification LBA range: start 0x0 length 0x200 00:34:23.556 raid5f : 5.28 288.76 18.05 0.00 0.00 10949957.47 336.99 491877.47 00:34:23.556 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:34:23.556 Verification LBA range: start 0x200 length 0x200 00:34:23.556 raid5f : 5.40 305.10 19.07 0.00 0.00 10560928.40 213.18 474718.95 00:34:23.556 [2024-11-20T13:54:26.473Z] =================================================================================================================== 00:34:23.556 [2024-11-20T13:54:26.473Z] Total : 593.86 37.12 0.00 0.00 10747780.05 213.18 491877.47 00:34:24.931 00:34:24.931 real 0m8.015s 00:34:24.931 user 0m14.642s 00:34:24.931 sys 0m0.398s 00:34:24.931 13:54:27 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:24.931 13:54:27 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:34:24.931 ************************************ 00:34:24.931 END TEST bdev_verify_big_io 00:34:24.931 ************************************ 00:34:25.190 13:54:27 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:25.190 13:54:27 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:34:25.190 13:54:27 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:25.190 13:54:27 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:25.190 ************************************ 00:34:25.190 START TEST bdev_write_zeroes 00:34:25.190 ************************************ 00:34:25.190 13:54:27 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:25.190 [2024-11-20 13:54:27.992457] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:34:25.190 [2024-11-20 13:54:27.993353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91447 ] 00:34:25.448 [2024-11-20 13:54:28.194640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:25.448 [2024-11-20 13:54:28.341930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:26.014 Running I/O for 1 seconds... 
00:34:27.390 23967.00 IOPS, 93.62 MiB/s 00:34:27.390 Latency(us) 00:34:27.390 [2024-11-20T13:54:30.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:27.390 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:27.390 raid5f : 1.01 23944.61 93.53 0.00 0.00 5327.01 1675.64 7566.43 00:34:27.390 [2024-11-20T13:54:30.307Z] =================================================================================================================== 00:34:27.390 [2024-11-20T13:54:30.307Z] Total : 23944.61 93.53 0.00 0.00 5327.01 1675.64 7566.43 00:34:28.327 00:34:28.327 real 0m3.178s 00:34:28.327 user 0m2.718s 00:34:28.327 sys 0m0.325s 00:34:28.327 13:54:31 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:28.327 13:54:31 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:34:28.327 ************************************ 00:34:28.327 END TEST bdev_write_zeroes 00:34:28.327 ************************************ 00:34:28.327 13:54:31 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:28.327 13:54:31 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:34:28.327 13:54:31 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:28.327 13:54:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:28.327 ************************************ 00:34:28.327 START TEST bdev_json_nonenclosed 00:34:28.327 ************************************ 00:34:28.327 13:54:31 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:28.327 [2024-11-20 
13:54:31.215522] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:34:28.327 [2024-11-20 13:54:31.215659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91501 ] 00:34:28.584 [2024-11-20 13:54:31.375178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:28.584 [2024-11-20 13:54:31.486881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:28.584 [2024-11-20 13:54:31.487046] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:34:28.584 [2024-11-20 13:54:31.487100] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:28.584 [2024-11-20 13:54:31.487114] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:28.842 00:34:28.842 real 0m0.591s 00:34:28.842 user 0m0.367s 00:34:28.842 sys 0m0.120s 00:34:28.842 13:54:31 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:28.842 13:54:31 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:34:28.842 ************************************ 00:34:28.842 END TEST bdev_json_nonenclosed 00:34:28.842 ************************************ 00:34:29.100 13:54:31 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:29.100 13:54:31 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:34:29.100 13:54:31 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:29.100 13:54:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:29.100 
************************************ 00:34:29.100 START TEST bdev_json_nonarray 00:34:29.100 ************************************ 00:34:29.100 13:54:31 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:29.100 [2024-11-20 13:54:31.857649] Starting SPDK v25.01-pre git sha1 fa4f4fd15 / DPDK 24.03.0 initialization... 00:34:29.100 [2024-11-20 13:54:31.857822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91531 ] 00:34:29.359 [2024-11-20 13:54:32.026136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:29.359 [2024-11-20 13:54:32.137688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:29.359 [2024-11-20 13:54:32.137847] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:34:29.359 [2024-11-20 13:54:32.137876] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:29.359 [2024-11-20 13:54:32.137916] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:29.618 00:34:29.618 real 0m0.600s 00:34:29.618 user 0m0.380s 00:34:29.618 sys 0m0.116s 00:34:29.618 13:54:32 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:29.618 13:54:32 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:34:29.618 ************************************ 00:34:29.618 END TEST bdev_json_nonarray 00:34:29.618 ************************************ 00:34:29.618 13:54:32 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:34:29.618 13:54:32 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:34:29.618 13:54:32 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:34:29.618 13:54:32 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:34:29.618 13:54:32 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:34:29.618 13:54:32 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:34:29.618 13:54:32 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:29.618 13:54:32 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:34:29.618 13:54:32 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:34:29.618 13:54:32 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:34:29.618 13:54:32 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:34:29.618 00:34:29.618 real 0m50.364s 00:34:29.618 user 1m8.882s 00:34:29.618 sys 0m5.650s 00:34:29.618 13:54:32 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:29.618 13:54:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:34:29.618 
************************************ 00:34:29.618 END TEST blockdev_raid5f 00:34:29.618 ************************************ 00:34:29.618 13:54:32 -- spdk/autotest.sh@194 -- # uname -s 00:34:29.618 13:54:32 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:34:29.618 13:54:32 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:34:29.618 13:54:32 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:34:29.618 13:54:32 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:34:29.618 13:54:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:34:29.618 13:54:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:34:29.618 13:54:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:29.618 13:54:32 -- common/autotest_common.sh@10 -- # set +x 00:34:29.618 13:54:32 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:34:29.618 13:54:32 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:34:29.618 13:54:32 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:34:29.618 13:54:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:29.618 13:54:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:34:29.618 13:54:32 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:34:29.618 13:54:32 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:34:29.618 13:54:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:34:29.618 13:54:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:34:29.618 13:54:32 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:29.618 13:54:32 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:29.618 13:54:32 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:29.618 13:54:32 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:29.618 13:54:32 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:29.618 13:54:32 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:29.618 13:54:32 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:29.618 13:54:32 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:29.618 13:54:32 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:34:29.618 13:54:32 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:34:29.618 13:54:32 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:34:29.618 13:54:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:29.618 13:54:32 -- common/autotest_common.sh@10 -- # set +x 00:34:29.883 13:54:32 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:34:29.883 13:54:32 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:34:29.884 13:54:32 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:34:29.884 13:54:32 -- common/autotest_common.sh@10 -- # set +x 00:34:31.263 INFO: APP EXITING 00:34:31.263 INFO: killing all VMs 00:34:31.263 INFO: killing vhost app 00:34:31.263 INFO: EXIT DONE 00:34:31.830 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:31.830 Waiting for block devices as requested 00:34:31.830 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:31.830 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:32.765 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:32.765 Cleaning 00:34:32.765 Removing: /var/run/dpdk/spdk0/config 00:34:32.765 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:32.765 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:32.765 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:32.765 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:32.765 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:32.765 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:32.765 Removing: /dev/shm/spdk_tgt_trace.pid57002 00:34:32.765 Removing: /var/run/dpdk/spdk0 00:34:32.765 Removing: /var/run/dpdk/spdk_pid56768 00:34:32.765 Removing: /var/run/dpdk/spdk_pid57002 00:34:32.765 Removing: /var/run/dpdk/spdk_pid57232 00:34:32.765 Removing: /var/run/dpdk/spdk_pid57336 00:34:32.765 Removing: /var/run/dpdk/spdk_pid57387 00:34:32.765 Removing: /var/run/dpdk/spdk_pid57520 00:34:32.765 Removing: /var/run/dpdk/spdk_pid57544 
00:34:32.765 Removing: /var/run/dpdk/spdk_pid57748 00:34:32.765 Removing: /var/run/dpdk/spdk_pid57860 00:34:32.765 Removing: /var/run/dpdk/spdk_pid57967 00:34:32.765 Removing: /var/run/dpdk/spdk_pid58089 00:34:32.765 Removing: /var/run/dpdk/spdk_pid58197 00:34:32.765 Removing: /var/run/dpdk/spdk_pid58237 00:34:32.765 Removing: /var/run/dpdk/spdk_pid58273 00:34:32.765 Removing: /var/run/dpdk/spdk_pid58349 00:34:32.765 Removing: /var/run/dpdk/spdk_pid58461 00:34:32.765 Removing: /var/run/dpdk/spdk_pid58930 00:34:32.765 Removing: /var/run/dpdk/spdk_pid59007 00:34:32.765 Removing: /var/run/dpdk/spdk_pid59081 00:34:32.765 Removing: /var/run/dpdk/spdk_pid59098 00:34:32.765 Removing: /var/run/dpdk/spdk_pid59255 00:34:32.765 Removing: /var/run/dpdk/spdk_pid59278 00:34:32.765 Removing: /var/run/dpdk/spdk_pid59427 00:34:32.765 Removing: /var/run/dpdk/spdk_pid59443 00:34:32.765 Removing: /var/run/dpdk/spdk_pid59513 00:34:32.765 Removing: /var/run/dpdk/spdk_pid59536 00:34:32.765 Removing: /var/run/dpdk/spdk_pid59600 00:34:32.765 Removing: /var/run/dpdk/spdk_pid59624 00:34:32.765 Removing: /var/run/dpdk/spdk_pid59819 00:34:32.765 Removing: /var/run/dpdk/spdk_pid59861 00:34:32.765 Removing: /var/run/dpdk/spdk_pid59949 00:34:32.765 Removing: /var/run/dpdk/spdk_pid61321 00:34:32.765 Removing: /var/run/dpdk/spdk_pid61538 00:34:32.765 Removing: /var/run/dpdk/spdk_pid61678 00:34:32.765 Removing: /var/run/dpdk/spdk_pid62338 00:34:32.765 Removing: /var/run/dpdk/spdk_pid62555 00:34:32.765 Removing: /var/run/dpdk/spdk_pid62695 00:34:32.765 Removing: /var/run/dpdk/spdk_pid63360 00:34:32.765 Removing: /var/run/dpdk/spdk_pid63696 00:34:32.765 Removing: /var/run/dpdk/spdk_pid63846 00:34:32.765 Removing: /var/run/dpdk/spdk_pid65260 00:34:32.765 Removing: /var/run/dpdk/spdk_pid65517 00:34:32.765 Removing: /var/run/dpdk/spdk_pid65664 00:34:32.765 Removing: /var/run/dpdk/spdk_pid67077 00:34:32.765 Removing: /var/run/dpdk/spdk_pid67336 00:34:32.765 Removing: /var/run/dpdk/spdk_pid67476 
00:34:32.765 Removing: /var/run/dpdk/spdk_pid68900 00:34:32.765 Removing: /var/run/dpdk/spdk_pid69352 00:34:32.765 Removing: /var/run/dpdk/spdk_pid69498 00:34:32.765 Removing: /var/run/dpdk/spdk_pid71015 00:34:32.765 Removing: /var/run/dpdk/spdk_pid71281 00:34:32.765 Removing: /var/run/dpdk/spdk_pid71432 00:34:32.765 Removing: /var/run/dpdk/spdk_pid72956 00:34:32.765 Removing: /var/run/dpdk/spdk_pid73229 00:34:32.765 Removing: /var/run/dpdk/spdk_pid73375 00:34:32.765 Removing: /var/run/dpdk/spdk_pid74894 00:34:32.765 Removing: /var/run/dpdk/spdk_pid75398 00:34:32.765 Removing: /var/run/dpdk/spdk_pid75544 00:34:32.765 Removing: /var/run/dpdk/spdk_pid75693 00:34:33.023 Removing: /var/run/dpdk/spdk_pid76141 00:34:33.023 Removing: /var/run/dpdk/spdk_pid76910 00:34:33.023 Removing: /var/run/dpdk/spdk_pid77297 00:34:33.023 Removing: /var/run/dpdk/spdk_pid77998 00:34:33.023 Removing: /var/run/dpdk/spdk_pid78483 00:34:33.023 Removing: /var/run/dpdk/spdk_pid79281 00:34:33.023 Removing: /var/run/dpdk/spdk_pid79700 00:34:33.023 Removing: /var/run/dpdk/spdk_pid81720 00:34:33.023 Removing: /var/run/dpdk/spdk_pid82180 00:34:33.023 Removing: /var/run/dpdk/spdk_pid82632 00:34:33.023 Removing: /var/run/dpdk/spdk_pid84766 00:34:33.023 Removing: /var/run/dpdk/spdk_pid85257 00:34:33.023 Removing: /var/run/dpdk/spdk_pid85763 00:34:33.023 Removing: /var/run/dpdk/spdk_pid86838 00:34:33.023 Removing: /var/run/dpdk/spdk_pid87172 00:34:33.023 Removing: /var/run/dpdk/spdk_pid88128 00:34:33.023 Removing: /var/run/dpdk/spdk_pid88456 00:34:33.023 Removing: /var/run/dpdk/spdk_pid89417 00:34:33.023 Removing: /var/run/dpdk/spdk_pid89741 00:34:33.023 Removing: /var/run/dpdk/spdk_pid90424 00:34:33.023 Removing: /var/run/dpdk/spdk_pid90704 00:34:33.023 Removing: /var/run/dpdk/spdk_pid90772 00:34:33.023 Removing: /var/run/dpdk/spdk_pid90814 00:34:33.023 Removing: /var/run/dpdk/spdk_pid91080 00:34:33.023 Removing: /var/run/dpdk/spdk_pid91252 00:34:33.023 Removing: /var/run/dpdk/spdk_pid91348 
00:34:33.023 Removing: /var/run/dpdk/spdk_pid91447 00:34:33.023 Removing: /var/run/dpdk/spdk_pid91501 00:34:33.023 Removing: /var/run/dpdk/spdk_pid91531 00:34:33.023 Clean 00:34:33.024 13:54:35 -- common/autotest_common.sh@1453 -- # return 0 00:34:33.024 13:54:35 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:34:33.024 13:54:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:33.024 13:54:35 -- common/autotest_common.sh@10 -- # set +x 00:34:33.024 13:54:35 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:34:33.024 13:54:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:33.024 13:54:35 -- common/autotest_common.sh@10 -- # set +x 00:34:33.024 13:54:35 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:33.024 13:54:35 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:33.024 13:54:35 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:33.024 13:54:35 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:34:33.024 13:54:35 -- spdk/autotest.sh@398 -- # hostname 00:34:33.024 13:54:35 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:33.282 geninfo: WARNING: invalid characters removed from testname! 
00:34:59.874 13:55:00 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:01.781 13:55:04 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:04.315 13:55:07 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:06.850 13:55:09 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:10.139 13:55:12 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:12.048 13:55:14 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:15.336 13:55:17 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:15.336 13:55:17 -- spdk/autorun.sh@1 -- $ timing_finish 00:35:15.336 13:55:17 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:35:15.336 13:55:17 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:15.336 13:55:17 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:35:15.336 13:55:17 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:15.336 + [[ -n 5374 ]] 00:35:15.336 + sudo kill 5374 00:35:15.345 [Pipeline] } 00:35:15.362 [Pipeline] // timeout 00:35:15.367 [Pipeline] } 00:35:15.382 [Pipeline] // stage 00:35:15.387 [Pipeline] } 00:35:15.401 [Pipeline] // catchError 00:35:15.411 [Pipeline] stage 00:35:15.413 [Pipeline] { (Stop VM) 00:35:15.426 [Pipeline] sh 00:35:15.707 + vagrant halt 00:35:19.918 ==> default: Halting domain... 00:35:26.505 [Pipeline] sh 00:35:26.818 + vagrant destroy -f 00:35:30.105 ==> default: Removing domain... 
00:35:30.376 [Pipeline] sh 00:35:30.657 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:35:30.668 [Pipeline] } 00:35:30.686 [Pipeline] // stage 00:35:30.692 [Pipeline] } 00:35:30.710 [Pipeline] // dir 00:35:30.717 [Pipeline] } 00:35:30.733 [Pipeline] // wrap 00:35:30.741 [Pipeline] } 00:35:30.756 [Pipeline] // catchError 00:35:30.767 [Pipeline] stage 00:35:30.770 [Pipeline] { (Epilogue) 00:35:30.785 [Pipeline] sh 00:35:31.069 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:37.709 [Pipeline] catchError 00:35:37.711 [Pipeline] { 00:35:37.725 [Pipeline] sh 00:35:38.006 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:38.007 Artifacts sizes are good 00:35:38.016 [Pipeline] } 00:35:38.030 [Pipeline] // catchError 00:35:38.041 [Pipeline] archiveArtifacts 00:35:38.048 Archiving artifacts 00:35:38.189 [Pipeline] cleanWs 00:35:38.211 [WS-CLEANUP] Deleting project workspace... 00:35:38.211 [WS-CLEANUP] Deferred wipeout is used... 00:35:38.216 [WS-CLEANUP] done 00:35:38.218 [Pipeline] } 00:35:38.233 [Pipeline] // stage 00:35:38.238 [Pipeline] } 00:35:38.252 [Pipeline] // node 00:35:38.256 [Pipeline] End of Pipeline 00:35:38.295 Finished: SUCCESS